08:18:20 Started by upstream project "policy-pap-master-merge-java" build number 352 08:18:20 originally caused by: 08:18:20 Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/pap/+/137774 08:18:20 Running as SYSTEM 08:18:20 [EnvInject] - Loading node environment variables. 08:18:20 Building remotely on prd-ubuntu1804-docker-8c-8g-35271 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap 08:18:20 [ssh-agent] Looking for ssh-agent implementation... 08:18:20 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine) 08:18:20 $ ssh-agent 08:18:20 SSH_AUTH_SOCK=/tmp/ssh-rM8zFjTDPBtL/agent.2140 08:18:20 SSH_AGENT_PID=2142 08:18:20 [ssh-agent] Started. 08:18:20 Running ssh-add (command line suppressed) 08:18:20 Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_14149749062329732618.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_14149749062329732618.key) 08:18:20 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user) 08:18:20 The recommended git tool is: NONE 08:18:22 using credential onap-jenkins-ssh 08:18:22 Wiping out workspace first. 
08:18:22 Cloning the remote Git repository 08:18:22 Cloning repository git://cloud.onap.org/mirror/policy/docker.git 08:18:22 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10 08:18:22 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git 08:18:22 > git --version # timeout=10 08:18:22 > git --version # 'git version 2.17.1' 08:18:22 using GIT_SSH to set credentials Gerrit user 08:18:22 Verifying host key using manually-configured host key entries 08:18:22 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30 08:18:22 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10 08:18:22 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 08:18:23 Avoid second fetch 08:18:23 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10 08:18:23 Checking out Revision 0d7c8284756c9a15d526c2d282cfc1dfd1595ffb (refs/remotes/origin/master) 08:18:23 > git config core.sparsecheckout # timeout=10 08:18:23 > git checkout -f 0d7c8284756c9a15d526c2d282cfc1dfd1595ffb # timeout=30 08:18:23 Commit message: "Update snapshot and/or references of policy/docker to latest snapshots" 08:18:23 > git rev-list --no-walk 0d7c8284756c9a15d526c2d282cfc1dfd1595ffb # timeout=10 08:18:23 provisioning config files... 
08:18:23 copy managed file [npmrc] to file:/home/jenkins/.npmrc 08:18:23 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf 08:18:23 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins13866659602178692332.sh 08:18:23 ---> python-tools-install.sh 08:18:23 Setup pyenv: 08:18:23 * system (set by /opt/pyenv/version) 08:18:23 * 3.8.13 (set by /opt/pyenv/version) 08:18:23 * 3.9.13 (set by /opt/pyenv/version) 08:18:23 * 3.10.6 (set by /opt/pyenv/version) 08:18:28 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-WerH 08:18:28 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv 08:18:31 lf-activate-venv(): INFO: Installing: lftools 08:19:06 lf-activate-venv(): INFO: Adding /tmp/venv-WerH/bin to PATH 08:19:06 Generating Requirements File 08:19:33 Python 3.10.6 08:19:33 pip 24.0 from /tmp/venv-WerH/lib/python3.10/site-packages/pip (python 3.10) 08:19:34 appdirs==1.4.4 08:19:34 argcomplete==3.3.0 08:19:34 aspy.yaml==1.3.0 08:19:34 attrs==23.2.0 08:19:34 autopage==0.5.2 08:19:34 beautifulsoup4==4.12.3 08:19:34 boto3==1.34.92 08:19:34 botocore==1.34.92 08:19:34 bs4==0.0.2 08:19:34 cachetools==5.3.3 08:19:34 certifi==2024.2.2 08:19:34 cffi==1.16.0 08:19:34 cfgv==3.4.0 08:19:34 chardet==5.2.0 08:19:34 charset-normalizer==3.3.2 08:19:34 click==8.1.7 08:19:34 cliff==4.6.0 08:19:34 cmd2==2.4.3 08:19:34 cryptography==3.3.2 08:19:34 debtcollector==3.0.0 08:19:34 decorator==5.1.1 08:19:34 defusedxml==0.7.1 08:19:34 Deprecated==1.2.14 08:19:34 distlib==0.3.8 08:19:34 dnspython==2.6.1 08:19:34 docker==4.2.2 08:19:34 dogpile.cache==1.3.2 08:19:34 email_validator==2.1.1 08:19:34 filelock==3.13.4 08:19:34 future==1.0.0 08:19:34 gitdb==4.0.11 08:19:34 GitPython==3.1.43 08:19:34 google-auth==2.29.0 08:19:34 httplib2==0.22.0 08:19:34 identify==2.5.36 08:19:34 idna==3.7 08:19:34 importlib-resources==1.5.0 08:19:34 iso8601==2.1.0 08:19:34 Jinja2==3.1.3 08:19:34 jmespath==1.0.1 08:19:34 jsonpatch==1.33 08:19:34 jsonpointer==2.4
08:19:34 jsonschema==4.21.1 08:19:34 jsonschema-specifications==2023.12.1 08:19:34 keystoneauth1==5.6.0 08:19:34 kubernetes==29.0.0 08:19:34 lftools==0.37.10 08:19:34 lxml==5.2.1 08:19:34 MarkupSafe==2.1.5 08:19:34 msgpack==1.0.8 08:19:34 multi_key_dict==2.0.3 08:19:34 munch==4.0.0 08:19:34 netaddr==1.2.1 08:19:34 netifaces==0.11.0 08:19:34 niet==1.4.2 08:19:34 nodeenv==1.8.0 08:19:34 oauth2client==4.1.3 08:19:34 oauthlib==3.2.2 08:19:34 openstacksdk==3.1.0 08:19:34 os-client-config==2.1.0 08:19:34 os-service-types==1.7.0 08:19:34 osc-lib==3.0.1 08:19:34 oslo.config==9.4.0 08:19:34 oslo.context==5.5.0 08:19:34 oslo.i18n==6.3.0 08:19:34 oslo.log==5.5.1 08:19:34 oslo.serialization==5.4.0 08:19:34 oslo.utils==7.1.0 08:19:34 packaging==24.0 08:19:34 pbr==6.0.0 08:19:34 platformdirs==4.2.1 08:19:34 prettytable==3.10.0 08:19:34 pyasn1==0.6.0 08:19:34 pyasn1_modules==0.4.0 08:19:34 pycparser==2.22 08:19:34 pygerrit2==2.0.15 08:19:34 PyGithub==2.3.0 08:19:34 pyinotify==0.9.6 08:19:34 PyJWT==2.8.0 08:19:34 PyNaCl==1.5.0 08:19:34 pyparsing==2.4.7 08:19:34 pyperclip==1.8.2 08:19:34 pyrsistent==0.20.0 08:19:34 python-cinderclient==9.5.0 08:19:34 python-dateutil==2.9.0.post0 08:19:34 python-heatclient==3.5.0 08:19:34 python-jenkins==1.8.2 08:19:34 python-keystoneclient==5.4.0 08:19:34 python-magnumclient==4.4.0 08:19:34 python-novaclient==18.6.0 08:19:34 python-openstackclient==6.6.0 08:19:34 python-swiftclient==4.5.0 08:19:34 PyYAML==6.0.1 08:19:34 referencing==0.35.0 08:19:34 requests==2.31.0 08:19:34 requests-oauthlib==2.0.0 08:19:34 requestsexceptions==1.4.0 08:19:34 rfc3986==2.0.0 08:19:34 rpds-py==0.18.0 08:19:34 rsa==4.9 08:19:34 ruamel.yaml==0.18.6 08:19:34 ruamel.yaml.clib==0.2.8 08:19:34 s3transfer==0.10.1 08:19:34 simplejson==3.19.2 08:19:34 six==1.16.0 08:19:34 smmap==5.0.1 08:19:34 soupsieve==2.5 08:19:34 stevedore==5.2.0 08:19:34 tabulate==0.9.0 08:19:34 toml==0.10.2 08:19:34 tomlkit==0.12.4 08:19:34 tqdm==4.66.2 08:19:34 typing_extensions==4.11.0 08:19:34 tzdata==2024.1
08:19:34 urllib3==1.26.18 08:19:34 virtualenv==20.26.0 08:19:34 wcwidth==0.2.13 08:19:34 websocket-client==1.8.0 08:19:34 wrapt==1.16.0 08:19:34 xdg==6.0.0 08:19:34 xmltodict==0.13.0 08:19:34 yq==3.4.1 08:19:34 [EnvInject] - Injecting environment variables from a build step. 08:19:34 [EnvInject] - Injecting as environment variables the properties content 08:19:34 SET_JDK_VERSION=openjdk17 08:19:34 GIT_URL="git://cloud.onap.org/mirror" 08:19:34 08:19:34 [EnvInject] - Variables injected successfully. 08:19:34 [policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins2165708490221253392.sh 08:19:34 ---> update-java-alternatives.sh 08:19:34 ---> Updating Java version 08:19:34 ---> Ubuntu/Debian system detected 08:19:34 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode 08:19:34 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode 08:19:34 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode 08:19:34 openjdk version "17.0.4" 2022-07-19 08:19:34 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04) 08:19:34 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing) 08:19:34 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 08:19:34 [EnvInject] - Injecting environment variables from a build step. 08:19:34 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env' 08:19:34 [EnvInject] - Variables injected successfully. 
08:19:34 [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins8569520744217647814.sh 08:19:34 + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap 08:19:34 + set +u 08:19:34 + save_set 08:19:34 + RUN_CSIT_SAVE_SET=ehxB 08:19:34 + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace 08:19:34 + '[' 1 -eq 0 ']' 08:19:34 + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 08:19:34 + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 08:19:34 + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 08:19:34 + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts 08:19:34 + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts 08:19:34 + export ROBOT_VARIABLES= 08:19:34 + ROBOT_VARIABLES= 08:19:34 + export PROJECT=pap 08:19:34 + PROJECT=pap 08:19:34 + cd /w/workspace/policy-pap-master-project-csit-pap 08:19:34 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 08:19:34 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 08:19:34 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh 08:19:34 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']' 08:19:34 + relax_set 08:19:34 + set +e 08:19:34 + set +o pipefail 08:19:34 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
08:19:34 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 08:19:34 +++ mktemp -d 08:19:34 ++ ROBOT_VENV=/tmp/tmp.FET4DHaDjP 08:19:34 ++ echo ROBOT_VENV=/tmp/tmp.FET4DHaDjP 08:19:34 +++ python3 --version 08:19:34 ++ echo 'Python version is: Python 3.6.9' 08:19:34 Python version is: Python 3.6.9 08:19:34 ++ python3 -m venv --clear /tmp/tmp.FET4DHaDjP 08:19:36 ++ source /tmp/tmp.FET4DHaDjP/bin/activate 08:19:36 +++ deactivate nondestructive 08:19:36 +++ '[' -n '' ']' 08:19:36 +++ '[' -n '' ']' 08:19:36 +++ '[' -n /bin/bash -o -n '' ']' 08:19:36 +++ hash -r 08:19:36 +++ '[' -n '' ']' 08:19:36 +++ unset VIRTUAL_ENV 08:19:36 +++ '[' '!' nondestructive = nondestructive ']' 08:19:36 +++ VIRTUAL_ENV=/tmp/tmp.FET4DHaDjP 08:19:36 +++ export VIRTUAL_ENV 08:19:36 +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 08:19:36 +++ PATH=/tmp/tmp.FET4DHaDjP/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 08:19:36 +++ export PATH 08:19:36 +++ '[' -n '' ']' 08:19:36 +++ '[' -z '' ']' 08:19:36 +++ _OLD_VIRTUAL_PS1= 08:19:36 +++ '[' 'x(tmp.FET4DHaDjP) ' '!=' x ']' 08:19:36 +++ PS1='(tmp.FET4DHaDjP) ' 08:19:36 +++ export PS1 08:19:36 +++ '[' -n /bin/bash -o -n '' ']' 08:19:36 +++ hash -r 08:19:36 ++ set -exu 08:19:36 ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1' 08:19:39 ++ echo 'Installing Python Requirements' 08:19:39 Installing Python Requirements 08:19:39 ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt 08:19:58 ++ python3 -m pip -qq freeze
08:19:58 bcrypt==4.0.1 08:19:58 beautifulsoup4==4.12.3 08:19:58 bitarray==2.9.2 08:19:58 certifi==2024.2.2 08:19:58 cffi==1.15.1 08:19:58 charset-normalizer==2.0.12 08:19:58 cryptography==40.0.2 08:19:58 decorator==5.1.1 08:19:58 elasticsearch==7.17.9 08:19:58 elasticsearch-dsl==7.4.1 08:19:58 enum34==1.1.10 08:19:58 idna==3.7 08:19:58 importlib-resources==5.4.0 08:19:58 ipaddr==2.2.0 08:19:58 isodate==0.6.1 08:19:58 jmespath==0.10.0 08:19:58 jsonpatch==1.32 08:19:58 jsonpath-rw==1.4.0 08:19:58 jsonpointer==2.3 08:19:58 lxml==5.2.1 08:19:58 netaddr==0.8.0 08:19:58 netifaces==0.11.0 08:19:58 odltools==0.1.28 08:19:58 paramiko==3.4.0 08:19:58 pkg_resources==0.0.0 08:19:58 ply==3.11 08:19:58 pyang==2.6.0 08:19:58 pyangbind==0.8.1 08:19:58 pycparser==2.21 08:19:58 pyhocon==0.3.60 08:19:58 PyNaCl==1.5.0 08:19:58 pyparsing==3.1.2 08:19:58 python-dateutil==2.9.0.post0 08:19:58 regex==2023.8.8 08:19:58 requests==2.27.1 08:19:58 robotframework==6.1.1 08:19:58 robotframework-httplibrary==0.4.2 08:19:58 robotframework-pythonlibcore==3.0.0 08:19:58 robotframework-requests==0.9.4 08:19:58 robotframework-selenium2library==3.0.0 08:19:58 robotframework-seleniumlibrary==5.1.3 08:19:58 robotframework-sshlibrary==3.8.0 08:19:58 scapy==2.5.0 08:19:58 scp==0.14.5 08:19:58 selenium==3.141.0 08:19:58 six==1.16.0 08:19:58 soupsieve==2.3.2.post1 08:19:58 urllib3==1.26.18 08:19:58 waitress==2.0.0 08:19:58 WebOb==1.8.7 08:19:58 WebTest==3.0.0 08:19:58 zipp==3.6.0 08:19:58 ++ mkdir -p /tmp/tmp.FET4DHaDjP/src/onap 08:19:58 ++ rm -rf /tmp/tmp.FET4DHaDjP/src/onap/testsuite 08:19:58 ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre 08:20:04 ++ echo 'Installing python confluent-kafka library' 08:20:04 Installing python confluent-kafka library 08:20:04 ++ python3 -m pip install -qq confluent-kafka 08:20:05 ++ echo 'Uninstall docker-py and reinstall docker.' 
08:20:05 Uninstall docker-py and reinstall docker. 08:20:05 ++ python3 -m pip uninstall -y -qq docker 08:20:05 ++ python3 -m pip install -U -qq docker 08:20:07 ++ python3 -m pip -qq freeze 08:20:07 bcrypt==4.0.1 08:20:07 beautifulsoup4==4.12.3 08:20:07 bitarray==2.9.2 08:20:07 certifi==2024.2.2 08:20:07 cffi==1.15.1 08:20:07 charset-normalizer==2.0.12 08:20:07 confluent-kafka==2.3.0 08:20:07 cryptography==40.0.2 08:20:07 decorator==5.1.1 08:20:07 deepdiff==5.7.0 08:20:07 dnspython==2.2.1 08:20:07 docker==5.0.3 08:20:07 elasticsearch==7.17.9 08:20:07 elasticsearch-dsl==7.4.1 08:20:07 enum34==1.1.10 08:20:07 future==1.0.0 08:20:07 idna==3.7 08:20:07 importlib-resources==5.4.0 08:20:07 ipaddr==2.2.0 08:20:07 isodate==0.6.1 08:20:07 Jinja2==3.0.3 08:20:07 jmespath==0.10.0 08:20:07 jsonpatch==1.32 08:20:07 jsonpath-rw==1.4.0 08:20:07 jsonpointer==2.3 08:20:07 kafka-python==2.0.2 08:20:07 lxml==5.2.1 08:20:07 MarkupSafe==2.0.1 08:20:07 more-itertools==5.0.0 08:20:07 netaddr==0.8.0 08:20:07 netifaces==0.11.0 08:20:07 odltools==0.1.28 08:20:07 ordered-set==4.0.2 08:20:07 paramiko==3.4.0 08:20:07 pbr==6.0.0 08:20:07 pkg_resources==0.0.0 08:20:07 ply==3.11 08:20:07 protobuf==3.19.6 08:20:07 pyang==2.6.0 08:20:07 pyangbind==0.8.1 08:20:07 pycparser==2.21 08:20:07 pyhocon==0.3.60 08:20:07 PyNaCl==1.5.0 08:20:07 pyparsing==3.1.2 08:20:07 python-dateutil==2.9.0.post0 08:20:07 PyYAML==6.0.1 08:20:07 regex==2023.8.8 08:20:07 requests==2.27.1 08:20:07 robotframework==6.1.1 08:20:07 robotframework-httplibrary==0.4.2 08:20:07 robotframework-onap==0.6.0.dev105 08:20:07 robotframework-pythonlibcore==3.0.0 08:20:07 robotframework-requests==0.9.4 08:20:07 robotframework-selenium2library==3.0.0 08:20:07 robotframework-seleniumlibrary==5.1.3 08:20:07 robotframework-sshlibrary==3.8.0 08:20:07 robotlibcore-temp==1.0.2 08:20:07 scapy==2.5.0 08:20:07 scp==0.14.5 08:20:07 selenium==3.141.0 08:20:07 six==1.16.0 08:20:07 soupsieve==2.3.2.post1 08:20:07 urllib3==1.26.18 08:20:07 waitress==2.0.0 
08:20:07 WebOb==1.8.7 08:20:07 websocket-client==1.3.1 08:20:07 WebTest==3.0.0 08:20:07 zipp==3.6.0 08:20:07 ++ uname 08:20:07 ++ grep -q Linux 08:20:07 ++ sudo apt-get -y -qq install libxml2-utils 08:20:07 + load_set 08:20:07 + _setopts=ehuxB 08:20:07 ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace 08:20:07 ++ tr : ' ' 08:20:07 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 08:20:07 + set +o braceexpand 08:20:07 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 08:20:07 + set +o hashall 08:20:07 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 08:20:07 + set +o interactive-comments 08:20:07 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 08:20:07 + set +o nounset 08:20:07 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 08:20:07 + set +o xtrace 08:20:07 ++ sed 's/./& /g' 08:20:07 ++ echo ehuxB 08:20:07 + for i in $(echo "$_setopts" | sed 's/./& /g') 08:20:07 + set +e 08:20:07 + for i in $(echo "$_setopts" | sed 's/./& /g') 08:20:07 + set +h 08:20:07 + for i in $(echo "$_setopts" | sed 's/./& /g') 08:20:07 + set +u 08:20:07 + for i in $(echo "$_setopts" | sed 's/./& /g') 08:20:07 + set +x 08:20:07 + source_safely /tmp/tmp.FET4DHaDjP/bin/activate 08:20:07 + '[' -z /tmp/tmp.FET4DHaDjP/bin/activate ']' 08:20:07 + relax_set 08:20:07 + set +e 08:20:07 + set +o pipefail 08:20:07 + . /tmp/tmp.FET4DHaDjP/bin/activate
08:20:07 ++ deactivate nondestructive 08:20:07 ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']' 08:20:07 ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 08:20:07 ++ export PATH 08:20:07 ++ unset _OLD_VIRTUAL_PATH 08:20:07 ++ '[' -n '' ']' 08:20:07 ++ '[' -n /bin/bash -o -n '' ']' 08:20:07 ++ hash -r 08:20:07 ++ '[' -n '' ']' 08:20:07 ++ unset VIRTUAL_ENV 08:20:07 ++ '[' '!' nondestructive = nondestructive ']' 08:20:07 ++ VIRTUAL_ENV=/tmp/tmp.FET4DHaDjP 08:20:07 ++ export VIRTUAL_ENV 08:20:07 ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 08:20:07 ++ PATH=/tmp/tmp.FET4DHaDjP/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 08:20:07 ++ export PATH 08:20:07 ++ '[' -n '' ']' 08:20:07 ++ '[' -z '' ']' 08:20:07 ++ _OLD_VIRTUAL_PS1='(tmp.FET4DHaDjP) ' 08:20:07 ++ '[' 'x(tmp.FET4DHaDjP) ' '!=' x ']' 08:20:07 ++ PS1='(tmp.FET4DHaDjP) (tmp.FET4DHaDjP) ' 08:20:07 ++ export PS1 08:20:07 ++ '[' -n /bin/bash -o -n '' ']' 08:20:07 ++ hash -r 08:20:07 + load_set 08:20:07 + _setopts=hxB 08:20:07 ++ echo braceexpand:hashall:interactive-comments:xtrace 08:20:07 ++ tr : ' ' 08:20:07 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 08:20:07 + set +o braceexpand 08:20:07 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 08:20:07 + set +o hashall 08:20:07 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
08:20:07 + set +o interactive-comments 08:20:07 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 08:20:07 + set +o xtrace 08:20:07 ++ echo hxB 08:20:07 ++ sed 's/./& /g' 08:20:07 + for i in $(echo "$_setopts" | sed 's/./& /g') 08:20:07 + set +h 08:20:07 + for i in $(echo "$_setopts" | sed 's/./& /g') 08:20:07 + set +x 08:20:07 + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests 08:20:07 + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests 08:20:07 + export TEST_OPTIONS= 08:20:07 + TEST_OPTIONS= 08:20:07 ++ mktemp -d 08:20:07 + WORKDIR=/tmp/tmp.2rpNlazw2W 08:20:07 + cd /tmp/tmp.2rpNlazw2W 08:20:07 + docker login -u docker -p docker nexus3.onap.org:10001 08:20:08 WARNING! Using --password via the CLI is insecure. Use --password-stdin. 08:20:08 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json. 08:20:08 Configure a credential helper to remove this warning. See 08:20:08 https://docs.docker.com/engine/reference/commandline/login/#credentials-store 08:20:08 08:20:08 Login Succeeded 08:20:08 + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 08:20:08 + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' 08:20:08 + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh' 08:20:08 Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 08:20:08 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 08:20:08 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' 08:20:08 + relax_set 08:20:08 + set +e 08:20:08 + set +o pipefail 08:20:08 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
08:20:08 ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh 08:20:08 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 08:20:08 ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview 08:20:08 +++ GERRIT_BRANCH=master 08:20:08 +++ echo GERRIT_BRANCH=master 08:20:08 GERRIT_BRANCH=master 08:20:08 +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models 08:20:08 +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models 08:20:08 +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models 08:20:08 Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'... 08:20:09 +++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies 08:20:09 +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies 08:20:09 +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 08:20:09 +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 08:20:09 +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json 08:20:09 +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
08:20:09 ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana 08:20:09 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 08:20:09 +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose 08:20:09 +++ grafana=false 08:20:09 +++ gui=false 08:20:09 +++ [[ 2 -gt 0 ]] 08:20:09 +++ key=apex-pdp 08:20:09 +++ case $key in 08:20:09 +++ echo apex-pdp 08:20:09 apex-pdp 08:20:09 +++ component=apex-pdp 08:20:09 +++ shift 08:20:09 +++ [[ 1 -gt 0 ]] 08:20:09 +++ key=--grafana 08:20:09 +++ case $key in 08:20:09 +++ grafana=true 08:20:09 +++ shift 08:20:09 +++ [[ 0 -gt 0 ]] 08:20:09 +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose 08:20:09 +++ echo 'Configuring docker compose...' 08:20:09 Configuring docker compose... 08:20:09 +++ source export-ports.sh 08:20:09 +++ source get-versions.sh 08:20:12 +++ '[' -z pap ']' 08:20:12 +++ '[' -n apex-pdp ']' 08:20:12 +++ '[' apex-pdp == logs ']' 08:20:12 +++ '[' true = true ']' 08:20:12 +++ echo 'Starting apex-pdp application with Grafana' 08:20:12 Starting apex-pdp application with Grafana 08:20:12 +++ docker-compose up -d apex-pdp grafana 08:20:13 Creating network "compose_default" with the default driver 08:20:13 Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)... 08:20:13 latest: Pulling from prom/prometheus 08:20:16 Digest: sha256:4f6c47e39a9064028766e8c95890ed15690c30f00c4ba14e7ce6ae1ded0295b1 08:20:16 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest 08:20:16 Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
08:20:16 latest: Pulling from grafana/grafana 08:20:24 Digest: sha256:7d5faae481a4c6f436c99e98af11534f7fd5e8d3e35213552dd1dd02bc393d2e 08:20:24 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest 08:20:24 Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)... 08:20:25 10.10.2: Pulling from mariadb 08:20:30 Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e 08:20:30 Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2 08:20:30 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)... 08:20:31 3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator 08:20:34 Digest: sha256:8c393534de923b51cd2c2937210a65f4f06f457c0dff40569dd547e5429385c8 08:20:34 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT 08:20:34 Pulling zookeeper (confluentinc/cp-zookeeper:latest)... 08:20:35 latest: Pulling from confluentinc/cp-zookeeper 08:20:45 Digest: sha256:4dc780642bfc5ec3a2d4901e2ff1f9ddef7f7c5c0b793e1e2911cbfb4e3a3214 08:20:45 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest 08:20:45 Pulling kafka (confluentinc/cp-kafka:latest)... 08:20:45 latest: Pulling from confluentinc/cp-kafka 08:20:50 Digest: sha256:620734d9fc0bb1f9886932e5baf33806074469f40e3fe246a3fdbb59309535fa 08:20:50 Status: Downloaded newer image for confluentinc/cp-kafka:latest 08:20:50 Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)... 08:20:50 3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator 08:20:53 Digest: sha256:6c43c624b12507ad4db9e9629273366fa843a4406dbb129d263c111145911791 08:20:53 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT 08:20:53 Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)... 
08:20:53 3.1.2-SNAPSHOT: Pulling from onap/policy-api 08:20:54 Digest: sha256:1dd97a95f6bcae15ec35d9d2c6a96d034d97ff5ce2273cf42b1c2549092a92a2 08:20:54 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT 08:20:54 Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)... 08:20:54 3.1.2-SNAPSHOT: Pulling from onap/policy-pap 08:20:58 Digest: sha256:eb3daea3b81a46c89d44f314f21edba0e1d1b0915fd599185530e673a4f3e30f 08:20:58 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT 08:20:58 Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)... 08:20:58 3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp 08:21:16 Digest: sha256:15db3ed25bc2c5fcac7635cebf8ee909afbd4fd846efff231410c6f1346614e7 08:21:17 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT 08:21:18 Creating prometheus ... 08:21:18 Creating mariadb ... 08:21:18 Creating simulator ... 08:21:18 Creating zookeeper ... 08:21:27 Creating mariadb ... done 08:21:27 Creating policy-db-migrator ... 08:21:28 Creating policy-db-migrator ... done 08:21:28 Creating policy-api ... 08:21:28 Creating policy-api ... done 08:21:29 Creating zookeeper ... done 08:21:29 Creating kafka ... 08:21:30 Creating kafka ... done 08:21:30 Creating policy-pap ... 08:21:31 Creating policy-pap ... done 08:21:32 Creating prometheus ... done 08:21:32 Creating grafana ... 08:21:33 Creating grafana ... done 08:21:34 Creating simulator ... done 08:21:34 Creating policy-apex-pdp ... 08:21:35 Creating policy-apex-pdp ... done
08:21:35 +++ echo 'Prometheus server: http://localhost:30259' 08:21:35 Prometheus server: http://localhost:30259 08:21:35 +++ echo 'Grafana server: http://localhost:30269' 08:21:35 Grafana server: http://localhost:30269 08:21:35 +++ cd /w/workspace/policy-pap-master-project-csit-pap 08:21:35 ++ sleep 10 08:21:45 ++ unset http_proxy https_proxy 08:21:45 ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003 08:21:45 Waiting for REST to come up on localhost port 30003... 08:21:45 NAMES STATUS 08:21:45 policy-apex-pdp Up 10 seconds 08:21:45 grafana Up 12 seconds 08:21:45 policy-pap Up 14 seconds 08:21:45 kafka Up 15 seconds 08:21:45 policy-api Up 16 seconds 08:21:45 zookeeper Up 16 seconds 08:21:45 simulator Up 11 seconds 08:21:45 prometheus Up 13 seconds 08:21:45 mariadb Up 18 seconds 08:21:50 NAMES STATUS 08:21:50 policy-apex-pdp Up 15 seconds 08:21:50 grafana Up 17 seconds 08:21:50 policy-pap Up 19 seconds 08:21:50 kafka Up 20 seconds 08:21:50 policy-api Up 21 seconds 08:21:50 zookeeper Up 21 seconds 08:21:50 simulator Up 16 seconds 08:21:50 prometheus Up 18 seconds 08:21:50 mariadb Up 23 seconds 08:21:55 NAMES STATUS 08:21:55 policy-apex-pdp Up 20 seconds 08:21:55 grafana Up 22 seconds 08:21:55 policy-pap Up 24 seconds 08:21:55 kafka Up 25 seconds 08:21:55 policy-api Up 26 seconds 08:21:55 zookeeper Up 26 seconds 08:21:55 simulator Up 21 seconds 08:21:55 prometheus Up 23 seconds 08:21:55 mariadb Up 28 seconds 08:22:00 NAMES STATUS 08:22:00 policy-apex-pdp Up 25 seconds 08:22:00 grafana Up 27 seconds 08:22:00 policy-pap Up 29 seconds 08:22:00 kafka Up 30 seconds 08:22:00 policy-api Up 31 seconds 08:22:00 zookeeper Up 31 seconds 08:22:00 simulator Up 26 seconds 08:22:00 prometheus Up 28 seconds 08:22:00 mariadb Up 33 seconds 08:22:05 NAMES STATUS 08:22:05 policy-apex-pdp Up 30 seconds 08:22:05 grafana Up 32 seconds 08:22:05 policy-pap Up 34 seconds 08:22:05 kafka Up 35 seconds 08:22:05 policy-api Up 36 seconds
08:22:05 zookeeper Up 36 seconds 08:22:05 simulator Up 31 seconds 08:22:05 prometheus Up 33 seconds 08:22:05 mariadb Up 38 seconds 08:22:05 ++ export 'SUITES=pap-test.robot 08:22:05 pap-slas.robot' 08:22:05 ++ SUITES='pap-test.robot 08:22:05 pap-slas.robot' 08:22:05 ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 08:22:05 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' 08:22:05 + load_set 08:22:05 + _setopts=hxB 08:22:05 ++ echo braceexpand:hashall:interactive-comments:xtrace 08:22:05 ++ tr : ' ' 08:22:05 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 08:22:05 + set +o braceexpand 08:22:05 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 08:22:05 + set +o hashall 08:22:05 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 08:22:05 + set +o interactive-comments 08:22:05 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 08:22:05 + set +o xtrace 08:22:05 ++ echo hxB 08:22:05 ++ sed 's/./& /g' 08:22:05 + for i in $(echo "$_setopts" | sed 's/./& /g') 08:22:05 + set +h 08:22:05 + for i in $(echo "$_setopts" | sed 's/./& /g') 08:22:05 + set +x 08:22:05 + docker_stats 08:22:05 + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt 08:22:05 ++ uname -s 08:22:05 + '[' Linux == Darwin ']' 08:22:05 + sh -c 'top -bn1 | head -3' 08:22:05 top - 08:22:05 up 4 min, 0 users, load average: 3.19, 1.36, 0.54 08:22:05 Tasks: 210 total, 1 running, 131 sleeping, 0 stopped, 0 zombie 08:22:05 %Cpu(s): 13.5 us, 2.8 sy, 0.0 ni, 79.4 id, 4.1 wa, 0.0 hi, 0.1 si, 0.1 st 08:22:05 + echo 08:22:05 + sh -c 'free -h' 08:22:05 08:22:05 total used free shared buff/cache available 08:22:05 Mem: 31G 2.6G 22G 1.3M 6.0G 28G 08:22:05 Swap: 1.0G 0B 1.0G 08:22:05 + echo 08:22:05 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
08:22:05 08:22:05 NAMES STATUS 08:22:05 policy-apex-pdp Up 30 seconds 08:22:05 grafana Up 32 seconds 08:22:05 policy-pap Up 34 seconds 08:22:05 kafka Up 35 seconds 08:22:05 policy-api Up 36 seconds 08:22:05 zookeeper Up 36 seconds 08:22:05 simulator Up 31 seconds 08:22:05 prometheus Up 33 seconds 08:22:05 mariadb Up 38 seconds 08:22:05 + echo 08:22:05 + docker stats --no-stream 08:22:05 08:22:08 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 08:22:08 46b83cfe537e policy-apex-pdp 1.88% 185.4MiB / 31.41GiB 0.58% 7.14kB / 6.7kB 0B / 0B 48 08:22:08 63ef1719939e grafana 0.06% 53.22MiB / 31.41GiB 0.17% 18.8kB / 3.31kB 0B / 24.9MB 18 08:22:08 4332c31c3362 policy-pap 18.22% 489MiB / 31.41GiB 1.52% 33.7kB / 35.6kB 0B / 149MB 64 08:22:08 7c7374bf05f8 kafka 57.81% 383.8MiB / 31.41GiB 1.19% 68.6kB / 71.8kB 0B / 475kB 84 08:22:08 c589a517bbf1 policy-api 0.10% 451.5MiB / 31.41GiB 1.40% 988kB / 646kB 0B / 0B 52 08:22:08 09fae81f821c zookeeper 0.10% 95.71MiB / 31.41GiB 0.30% 52.3kB / 45.6kB 0B / 414kB 60 08:22:08 4a3fd8a3bc78 simulator 0.08% 119.9MiB / 31.41GiB 0.37% 1.15kB / 0B 0B / 0B 76 08:22:08 353097dba0ba prometheus 0.02% 18.37MiB / 31.41GiB 0.06% 1.52kB / 474B 225kB / 0B 13 08:22:08 7d1448dfd828 mariadb 0.02% 102.3MiB / 31.41GiB 0.32% 935kB / 1.18MB 11MB / 60.8MB 37 08:22:08 + echo 08:22:08 08:22:08 + cd /tmp/tmp.2rpNlazw2W 08:22:08 + echo 'Reading the testplan:' 08:22:08 Reading the testplan: 08:22:08 + echo 'pap-test.robot 08:22:08 pap-slas.robot' 08:22:08 + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' 08:22:08 + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|' 08:22:08 + cat testplan.txt 08:22:08 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot 08:22:08 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot 08:22:08 ++ xargs 08:22:08 + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot' 08:22:08 + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 08:22:08 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' 08:22:08 ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 08:22:08 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 08:22:08 + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...' 08:22:08 Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ... 
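The SUITES value assembled above comes from a short pipeline: filter comment and blank lines out of testplan.txt, prefix each suite with the csit tests directory, then flatten the list into one space-separated string with xargs. A standalone reconstruction of that step (grep -E stands in for the log's egrep; a temp file stands in for the real testplan.txt):

```shell
#!/usr/bin/env bash
# Rebuild SUITES the way the run script traced above does:
# strip comments/blanks, prefix the tests directory, flatten with xargs.
TESTS_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests

tp=$(mktemp)                                   # stands in for testplan.txt
printf '%s\n' 'pap-test.robot' 'pap-slas.robot' > "$tp"

SUITES=$(grep -Ev '(^[[:space:]]*#|^[[:space:]]*$)' "$tp" \
  | sed "s|^|${TESTS_DIR}/|" \
  | xargs)                                     # xargs joins the lines with spaces
echo "$SUITES"
rm -f "$tp"
```

Run on its own, this prints both suite paths on a single line, which is the exact string later handed to robot.run.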
08:22:08 + relax_set
08:22:08 + set +e
08:22:08 + set +o pipefail
08:22:08 + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
08:22:08 ==============================================================================
08:22:08 pap
08:22:08 ==============================================================================
08:22:08 pap.Pap-Test
08:22:08 ==============================================================================
08:22:09 LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
08:22:09 ------------------------------------------------------------------------------
08:22:10 LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
08:22:10 ------------------------------------------------------------------------------
08:22:10 LoadNodeTemplates :: Create node templates in database using speci... | PASS |
08:22:10 ------------------------------------------------------------------------------
08:22:11 Healthcheck :: Verify policy pap health check | PASS |
08:22:11 ------------------------------------------------------------------------------
08:22:31 Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
08:22:31 ------------------------------------------------------------------------------
08:22:31 Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
08:22:31 ------------------------------------------------------------------------------
08:22:32 AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
08:22:32 ------------------------------------------------------------------------------
08:22:32 QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
08:22:32 ------------------------------------------------------------------------------
08:22:32 ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
08:22:32 ------------------------------------------------------------------------------
08:22:32 QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
08:22:32 ------------------------------------------------------------------------------
08:22:33 DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
08:22:33 ------------------------------------------------------------------------------
08:22:33 QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
08:22:33 ------------------------------------------------------------------------------
08:22:33 QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
08:22:33 ------------------------------------------------------------------------------
08:22:33 QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
08:22:33 ------------------------------------------------------------------------------
08:22:33 UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
08:22:33 ------------------------------------------------------------------------------
08:22:34 UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
08:22:34 ------------------------------------------------------------------------------
08:22:34 QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
08:22:34 ------------------------------------------------------------------------------
08:22:54 QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | FAIL |
08:22:54 pdpTypeC != pdpTypeA
08:22:54 ------------------------------------------------------------------------------
08:22:54 QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
08:22:54 ------------------------------------------------------------------------------
08:22:54 DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
08:22:54 ------------------------------------------------------------------------------
08:22:54 DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
08:22:54 ------------------------------------------------------------------------------
08:22:54 QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
08:22:54 ------------------------------------------------------------------------------
08:22:54 pap.Pap-Test | FAIL |
08:22:54 22 tests, 21 passed, 1 failed
08:22:54 ==============================================================================
08:22:54 pap.Pap-Slas
08:22:54 ==============================================================================
08:23:54 WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
08:23:54 ------------------------------------------------------------------------------
08:23:54 ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
08:23:54 ------------------------------------------------------------------------------
08:23:54 ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
08:23:54 ------------------------------------------------------------------------------
08:23:54 ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
08:23:54 ------------------------------------------------------------------------------
08:23:54 ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
08:23:54 ------------------------------------------------------------------------------
08:23:55 ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
08:23:55 ------------------------------------------------------------------------------
08:23:55 ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
08:23:55 ------------------------------------------------------------------------------
08:23:55 ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
08:23:55 ------------------------------------------------------------------------------
08:23:55 pap.Pap-Slas | PASS |
08:23:55 8 tests, 8 passed, 0 failed
08:23:55 ==============================================================================
08:23:55 pap | FAIL |
08:23:55 30 tests, 29 passed, 1 failed
08:23:55 ==============================================================================
08:23:55 Output: /tmp/tmp.2rpNlazw2W/output.xml
08:23:55 Log: /tmp/tmp.2rpNlazw2W/log.html
08:23:55 Report: /tmp/tmp.2rpNlazw2W/report.html
08:23:55 + RESULT=1
08:23:55 + load_set
08:23:55 + _setopts=hxB
08:23:55 ++ echo braceexpand:hashall:interactive-comments:xtrace
08:23:55 ++ tr : ' '
08:23:55 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
08:23:55 + set +o braceexpand
08:23:55 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
08:23:55 + set +o hashall
08:23:55 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
08:23:55 + set +o interactive-comments
08:23:55 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
08:23:55 + set +o xtrace
08:23:55 ++ echo hxB
08:23:55 ++ sed 's/./& /g'
08:23:55 + for i in $(echo "$_setopts" | sed 's/./& /g')
08:23:55 + set +h
08:23:55 + for i in $(echo "$_setopts" | sed 's/./& /g')
08:23:55 + set +x
08:23:55 + echo 'RESULT: 1'
08:23:55 RESULT: 1
08:23:55 + exit 1
08:23:55 + on_exit
08:23:55 + rc=1
08:23:55 + [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
08:23:55 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
08:23:55 NAMES STATUS
08:23:55 policy-apex-pdp Up 2 minutes
08:23:55 grafana Up 2 minutes
08:23:55 policy-pap Up 2 minutes
08:23:55 kafka Up 2 minutes
08:23:55 policy-api Up 2 minutes
08:23:55 zookeeper Up 2 minutes
08:23:55 simulator Up 2 minutes
08:23:55 prometheus Up 2 minutes
08:23:55 mariadb Up 2 minutes
08:23:55 + docker_stats
08:23:55 ++ uname -s
08:23:55 + '[' Linux == Darwin ']'
08:23:55 + sh -c 'top -bn1 | head -3'
08:23:55 top - 08:23:55 up 6 min, 0 users, load average: 0.63, 1.02, 0.51
08:23:55 Tasks: 197 total, 1
running, 129 sleeping, 0 stopped, 0 zombie
08:23:55 %Cpu(s): 10.7 us, 2.1 sy, 0.0 ni, 83.9 id, 3.1 wa, 0.0 hi, 0.1 si, 0.1 st
08:23:55 + echo
08:23:55
08:23:55 + sh -c 'free -h'
08:23:55 total used free shared buff/cache available
08:23:55 Mem: 31G 2.6G 22G 1.3M 6.0G 28G
08:23:55 Swap: 1.0G 0B 1.0G
08:23:55 + echo
08:23:55
08:23:55 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
08:23:55 NAMES STATUS
08:23:55 policy-apex-pdp Up 2 minutes
08:23:55 grafana Up 2 minutes
08:23:55 policy-pap Up 2 minutes
08:23:55 kafka Up 2 minutes
08:23:55 policy-api Up 2 minutes
08:23:55 zookeeper Up 2 minutes
08:23:55 simulator Up 2 minutes
08:23:55 prometheus Up 2 minutes
08:23:55 mariadb Up 2 minutes
08:23:55 + echo
08:23:55
08:23:55 + docker stats --no-stream
08:23:57 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
08:23:57 46b83cfe537e policy-apex-pdp 0.79% 180.2MiB / 31.41GiB 0.56% 56.7kB / 91kB 0B / 0B 52
08:23:57 63ef1719939e grafana 0.03% 56.63MiB / 31.41GiB 0.18% 19.9kB / 4.5kB 0B / 24.9MB 18
08:23:57 4332c31c3362 policy-pap 0.52% 472.2MiB / 31.41GiB 1.47% 2.47MB / 1.05MB 0B / 149MB 66
08:23:57 7c7374bf05f8 kafka 9.77% 391.7MiB / 31.41GiB 1.22% 237kB / 213kB 0B / 573kB 85
08:23:57 c589a517bbf1 policy-api 0.09% 454.9MiB / 31.41GiB 1.41% 2.45MB / 1.1MB 0B / 0B 55
08:23:57 09fae81f821c zookeeper 0.10% 96.68MiB / 31.41GiB 0.30% 55.2kB / 47.2kB 0B / 414kB 60
08:23:57 4a3fd8a3bc78 simulator 0.07% 120MiB / 31.41GiB 0.37% 1.37kB / 0B 0B / 0B 78
08:23:57 353097dba0ba prometheus 0.02% 24.3MiB / 31.41GiB 0.08% 191kB / 11.1kB 225kB / 0B 13
08:23:57 7d1448dfd828 mariadb 0.01% 103.5MiB / 31.41GiB 0.32% 2.02MB / 4.87MB 11MB / 61MB 28
08:23:57 + echo
08:23:57
08:23:57 + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
08:23:57 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']'
08:23:57 + relax_set
08:23:57 + set +e
08:23:57 + set +o pipefail
08:23:57 + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
08:23:57 ++ echo 'Shut down started!'
08:23:57 Shut down started!
08:23:57 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
08:23:57 ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
08:23:57 ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
08:23:57 ++ source export-ports.sh
08:23:58 ++ source get-versions.sh
08:24:00 ++ echo 'Collecting logs from docker compose containers...'
08:24:00 Collecting logs from docker compose containers...
08:24:00 ++ docker-compose logs
08:24:02 ++ cat docker_compose.log
08:24:02 Attaching to policy-apex-pdp, grafana, policy-pap, kafka, policy-api, policy-db-migrator, zookeeper, simulator, prometheus, mariadb
08:24:02 grafana | logger=settings t=2024-04-26T08:21:33.556913742Z level=info msg="Starting Grafana" version=10.4.2 commit=701c851be7a930e04fbc6ebb1cd4254da80edd4c branch=v10.4.x compiled=2024-04-26T08:21:33Z
08:24:02 grafana | logger=settings t=2024-04-26T08:21:33.557173965Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
08:24:02 grafana | logger=settings t=2024-04-26T08:21:33.557187436Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
08:24:02 grafana | logger=settings t=2024-04-26T08:21:33.557191106Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
08:24:02 grafana | logger=settings t=2024-04-26T08:21:33.557194436Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
08:24:02 grafana | logger=settings t=2024-04-26T08:21:33.557208387Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
08:24:02 grafana | logger=settings t=2024-04-26T08:21:33.557212687Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
08:24:02 grafana | logger=settings
t=2024-04-26T08:21:33.557215797Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 08:24:02 grafana | logger=settings t=2024-04-26T08:21:33.557219157Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 08:24:02 grafana | logger=settings t=2024-04-26T08:21:33.557222297Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 08:24:02 grafana | logger=settings t=2024-04-26T08:21:33.557226697Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 08:24:02 grafana | logger=settings t=2024-04-26T08:21:33.557229728Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 08:24:02 grafana | logger=settings t=2024-04-26T08:21:33.557232828Z level=info msg=Target target=[all] 08:24:02 grafana | logger=settings t=2024-04-26T08:21:33.557239218Z level=info msg="Path Home" path=/usr/share/grafana 08:24:02 grafana | logger=settings t=2024-04-26T08:21:33.557242148Z level=info msg="Path Data" path=/var/lib/grafana 08:24:02 grafana | logger=settings t=2024-04-26T08:21:33.557247148Z level=info msg="Path Logs" path=/var/log/grafana 08:24:02 grafana | logger=settings t=2024-04-26T08:21:33.557250139Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 08:24:02 grafana | logger=settings t=2024-04-26T08:21:33.557253079Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 08:24:02 grafana | logger=settings t=2024-04-26T08:21:33.557256069Z level=info msg="App mode production" 08:24:02 grafana | logger=sqlstore t=2024-04-26T08:21:33.557566995Z level=info msg="Connecting to DB" dbtype=sqlite3 08:24:02 grafana | logger=sqlstore t=2024-04-26T08:21:33.557588487Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.558356656Z level=info msg="Starting DB 
migrations" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.559359177Z level=info msg="Executing migration" id="create migration_log table" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.560169068Z level=info msg="Migration successfully executed" id="create migration_log table" duration=809.821µs 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.563470568Z level=info msg="Executing migration" id="create user table" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.564072279Z level=info msg="Migration successfully executed" id="create user table" duration=601.5µs 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.568235552Z level=info msg="Executing migration" id="add unique index user.login" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.56897602Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=740.518µs 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.574769267Z level=info msg="Executing migration" id="add unique index user.email" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.575849342Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.073176ms 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.579186273Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.580183294Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=997.041µs 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.583559848Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.584212431Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=653.684µs 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.59493205Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 08:24:02 
grafana | logger=migrator t=2024-04-26T08:21:33.598611929Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=3.679318ms 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.60294325Z level=info msg="Executing migration" id="create user table v2" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.603820086Z level=info msg="Migration successfully executed" id="create user table v2" duration=873.846µs 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.607079713Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.608204951Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.125058ms 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.613886902Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.615160867Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.268596ms 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.61988684Z level=info msg="Executing migration" id="copy data_source v1 to v2" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.620377844Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=493.425µs 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.623752467Z level=info msg="Executing migration" id="Drop old table user_v1" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.624326317Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=575.45µs 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.631442911Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.633291717Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" 
duration=1.848385ms 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.637897142Z level=info msg="Executing migration" id="Update user table charset" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.637939684Z level=info msg="Migration successfully executed" id="Update user table charset" duration=43.812µs 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.641242534Z level=info msg="Executing migration" id="Add last_seen_at column to user" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.642997894Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.75509ms 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.646287682Z level=info msg="Executing migration" id="Add missing user data" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.646650961Z level=info msg="Migration successfully executed" id="Add missing user data" duration=363.209µs 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.651477719Z level=info msg="Executing migration" id="Add is_disabled column to user" 08:24:02 policy-apex-pdp | Waiting for mariadb port 3306... 08:24:02 policy-apex-pdp | Waiting for kafka port 9092... 08:24:02 policy-apex-pdp | mariadb (172.17.0.2:3306) open 08:24:02 policy-apex-pdp | kafka (172.17.0.8:9092) open 08:24:02 policy-apex-pdp | Waiting for pap port 6969... 
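The apex-pdp container's "Waiting for mariadb port 3306..." lines come from a startup gate that polls each dependency's port before launching the service. A minimal sketch of that pattern, assuming bash and its /dev/tcp redirection (wait_for_port is a hypothetical helper, not the script the container actually ships):

```shell
#!/usr/bin/env bash
# Poll host:port until it accepts a TCP connection, or give up after N tries.
# NOTE: illustrative only; the real container entrypoint may use netcat/curl.
wait_for_port() {
  local host=$1 port=$2 retries=${3:-30} i
  for ((i = 0; i < retries; i++)); do
    # bash-only /dev/tcp: succeeds when a TCP connect to host:port works
    if (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; then
      echo "${host} (${port}) open"   # mirrors the "... open" log lines above
      return 0
    fi
    sleep 1
  done
  echo "timed out waiting for ${host}:${port}" >&2
  return 1
}
```

An entrypoint would call it along the lines of `wait_for_port mariadb 3306 && wait_for_port kafka 9092` before exec'ing the Java process.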
08:24:02 policy-apex-pdp | pap (172.17.0.9:6969) open 08:24:02 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.239+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.410+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 08:24:02 policy-apex-pdp | allow.auto.create.topics = true 08:24:02 policy-apex-pdp | auto.commit.interval.ms = 5000 08:24:02 policy-apex-pdp | auto.include.jmx.reporter = true 08:24:02 policy-apex-pdp | auto.offset.reset = latest 08:24:02 policy-apex-pdp | bootstrap.servers = [kafka:9092] 08:24:02 policy-apex-pdp | check.crcs = true 08:24:02 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 08:24:02 policy-apex-pdp | client.id = consumer-385d2de3-e329-4c2e-8254-58c110e4f277-1 08:24:02 policy-apex-pdp | client.rack = 08:24:02 policy-apex-pdp | connections.max.idle.ms = 540000 08:24:02 policy-apex-pdp | default.api.timeout.ms = 60000 08:24:02 policy-apex-pdp | enable.auto.commit = true 08:24:02 policy-apex-pdp | exclude.internal.topics = true 08:24:02 policy-apex-pdp | fetch.max.bytes = 52428800 08:24:02 
policy-apex-pdp | fetch.max.wait.ms = 500 08:24:02 policy-apex-pdp | fetch.min.bytes = 1 08:24:02 policy-apex-pdp | group.id = 385d2de3-e329-4c2e-8254-58c110e4f277 08:24:02 policy-apex-pdp | group.instance.id = null 08:24:02 policy-apex-pdp | heartbeat.interval.ms = 3000 08:24:02 policy-apex-pdp | interceptor.classes = [] 08:24:02 policy-apex-pdp | internal.leave.group.on.close = true 08:24:02 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 08:24:02 policy-apex-pdp | isolation.level = read_uncommitted 08:24:02 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 08:24:02 policy-apex-pdp | max.partition.fetch.bytes = 1048576 08:24:02 policy-apex-pdp | max.poll.interval.ms = 300000 08:24:02 policy-apex-pdp | max.poll.records = 500 08:24:02 policy-api | Waiting for mariadb port 3306... 08:24:02 policy-api | Waiting for policy-db-migrator port 6824... 08:24:02 policy-api | mariadb (172.17.0.2:3306) open 08:24:02 policy-api | policy-db-migrator (172.17.0.6:6824) open 08:24:02 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 08:24:02 policy-api | 08:24:02 policy-api | . 
____ _ __ _ _ 08:24:02 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 08:24:02 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 08:24:02 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 08:24:02 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / 08:24:02 policy-api | =========|_|==============|___/=/_/_/_/ 08:24:02 policy-api | :: Spring Boot :: (v3.1.10) 08:24:02 policy-api | 08:24:02 policy-api | [2024-04-26T08:21:41.401+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final 08:24:02 policy-api | [2024-04-26T08:21:41.461+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.11 with PID 21 (/app/api.jar started by policy in /opt/app/policy/api/bin) 08:24:02 policy-api | [2024-04-26T08:21:41.462+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" 08:24:02 policy-api | [2024-04-26T08:21:43.447+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 08:24:02 policy-api | [2024-04-26T08:21:43.530+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 73 ms. Found 6 JPA repository interfaces. 08:24:02 policy-api | [2024-04-26T08:21:43.936+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 08:24:02 policy-api | [2024-04-26T08:21:43.937+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 08:24:02 policy-api | [2024-04-26T08:21:44.635+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 08:24:02 policy-api | [2024-04-26T08:21:44.645+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 08:24:02 policy-api | [2024-04-26T08:21:44.648+00:00|INFO|StandardService|main] Starting service [Tomcat] 08:24:02 policy-api | [2024-04-26T08:21:44.648+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19] 08:24:02 policy-api | [2024-04-26T08:21:44.740+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 08:24:02 policy-api | [2024-04-26T08:21:44.740+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3210 ms 08:24:02 policy-api | [2024-04-26T08:21:45.184+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 08:24:02 policy-api | [2024-04-26T08:21:45.267+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.2.Final 08:24:02 policy-api | [2024-04-26T08:21:45.318+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 08:24:02 policy-api | [2024-04-26T08:21:45.639+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 08:24:02 policy-api | [2024-04-26T08:21:45.669+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 08:24:02 policy-api | [2024-04-26T08:21:45.760+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@26844abb 08:24:02 policy-api | [2024-04-26T08:21:45.762+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
08:24:02 policy-api | [2024-04-26T08:21:47.668+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 08:24:02 policy-api | [2024-04-26T08:21:47.671+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 08:24:02 policy-api | [2024-04-26T08:21:48.662+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 08:24:02 policy-api | [2024-04-26T08:21:49.508+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 08:24:02 policy-api | [2024-04-26T08:21:50.663+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 08:24:02 policy-api | [2024-04-26T08:21:50.863+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@2fcc32ae, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@5ef53e42, org.springframework.security.web.context.SecurityContextHolderFilter@54ce2da8, org.springframework.security.web.header.HeaderWriterFilter@5da1f9b9, org.springframework.security.web.authentication.logout.LogoutFilter@29726180, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@1ef46efc, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@1b48c142, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@3dc238ae, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@43ec61f0, org.springframework.security.web.access.ExceptionTranslationFilter@2929ef51, 
org.springframework.security.web.access.intercept.AuthorizationFilter@3405202c]
08:24:02 policy-api | [2024-04-26T08:21:51.718+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
08:24:02 policy-api | [2024-04-26T08:21:51.813+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
08:24:02 policy-api | [2024-04-26T08:21:51.839+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1'
08:24:02 policy-api | [2024-04-26T08:21:51.858+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.19 seconds (process running for 11.864)
08:24:02 policy-api | [2024-04-26T08:22:08.892+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet'
08:24:02 policy-api | [2024-04-26T08:22:08.892+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet'
08:24:02 policy-api | [2024-04-26T08:22:08.893+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms
08:24:02 policy-api | [2024-04-26T08:22:09.214+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers:
08:24:02 policy-api | []
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.653119803Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.641143ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.656952929Z level=info msg="Executing migration" id="Add index user.login/user.email"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.658046035Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.092287ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.661113842Z level=info msg="Executing migration" id="Add is_service_account column to user"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.662935856Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.820313ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.666272597Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.678966927Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=12.68514ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.683593775Z level=info msg="Executing migration" id="Add uid column to user"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.684433648Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=841.333µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.68702563Z level=info msg="Executing migration" id="Update uid column values for users"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.687161598Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=137.408µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.689198132Z level=info msg="Executing migration" id="Add unique index user_uid"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.690279207Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=1.075815ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.693509673Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.694006308Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=496.465µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.698684828Z level=info msg="Executing migration" id="create temp user table v1-7"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.699523341Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=838.013µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.702146595Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.702875443Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=728.188µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.711196709Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.712261243Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.064274ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.717799767Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.718930416Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.130049ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.721883557Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.722998904Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.115157ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.725944745Z level=info msg="Executing migration" id="Update temp_user table charset"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.725973287Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=27.211µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.7312952Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.731961734Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=666.214µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.735376988Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.736399491Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.022613ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.739646488Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.740440928Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=795.22µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.744942089Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.745589702Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=647.583µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.750515264Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.755287129Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=4.768195ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.759703675Z level=info msg="Executing migration" id="create temp_user v2"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.760569349Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=867.094µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.765008278Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.765768306Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=759.698µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.768437243Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.769520758Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.088445ms
08:24:02 policy-apex-pdp | metadata.max.age.ms = 300000
08:24:02 policy-apex-pdp | metric.reporters = []
08:24:02 policy-apex-pdp | metrics.num.samples = 2
08:24:02 policy-apex-pdp | metrics.recording.level = INFO
08:24:02 policy-apex-pdp | metrics.sample.window.ms = 30000
08:24:02 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
08:24:02 policy-apex-pdp | receive.buffer.bytes = 65536
08:24:02 policy-apex-pdp | reconnect.backoff.max.ms = 1000
08:24:02 policy-apex-pdp | reconnect.backoff.ms = 50
08:24:02 policy-apex-pdp | request.timeout.ms = 30000
08:24:02 policy-apex-pdp | retry.backoff.ms = 100
08:24:02 policy-apex-pdp | sasl.client.callback.handler.class = null
08:24:02 policy-apex-pdp | sasl.jaas.config = null
08:24:02 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
08:24:02 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
08:24:02 policy-apex-pdp | sasl.kerberos.service.name = null
08:24:02 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
08:24:02 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
08:24:02 policy-apex-pdp | sasl.login.callback.handler.class = null
08:24:02 policy-apex-pdp | sasl.login.class = null
08:24:02 policy-apex-pdp | sasl.login.connect.timeout.ms = null
08:24:02 policy-apex-pdp | sasl.login.read.timeout.ms = null
08:24:02 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
08:24:02 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
08:24:02 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
08:24:02 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
08:24:02 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
08:24:02 policy-apex-pdp | sasl.login.retry.backoff.ms = 100
08:24:02 policy-apex-pdp | sasl.mechanism = GSSAPI
08:24:02 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
08:24:02 policy-apex-pdp | sasl.oauthbearer.expected.audience = null
08:24:02 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
08:24:02 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
08:24:02 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
08:24:02 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
08:24:02 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
08:24:02 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
08:24:02 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
08:24:02 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
08:24:02 policy-apex-pdp | security.protocol = PLAINTEXT
08:24:02 policy-apex-pdp | security.providers = null
08:24:02 policy-apex-pdp | send.buffer.bytes = 131072
08:24:02 policy-apex-pdp | session.timeout.ms = 45000
08:24:02 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
08:24:02 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.772562385Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.773629179Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.066484ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.778075627Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.778831836Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=755.969µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.782131125Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.782548797Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=417.802µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.785567481Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.787026026Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=1.453764ms
08:24:02 mariadb | 2024-04-26 08:21:27+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
08:24:02 mariadb | 2024-04-26 08:21:27+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
08:24:02 mariadb | 2024-04-26 08:21:27+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
08:24:02 mariadb | 2024-04-26 08:21:27+00:00 [Note] [Entrypoint]: Initializing database files
08:24:02 mariadb | 2024-04-26 8:21:27 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
08:24:02 mariadb | 2024-04-26 8:21:27 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
08:24:02 mariadb | 2024-04-26 8:21:27 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
08:24:02 mariadb |
08:24:02 mariadb |
08:24:02 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
08:24:02 mariadb | To do so, start the server, then issue the following command:
08:24:02 mariadb |
08:24:02 mariadb | '/usr/bin/mysql_secure_installation'
08:24:02 mariadb |
08:24:02 mariadb | which will also give you the option of removing the test
08:24:02 mariadb | databases and anonymous user created by default. This is
08:24:02 mariadb | strongly recommended for production servers.
08:24:02 mariadb |
08:24:02 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb
08:24:02 mariadb |
08:24:02 mariadb | Please report any problems at https://mariadb.org/jira
08:24:02 mariadb |
08:24:02 mariadb | The latest information about MariaDB is available at https://mariadb.org/.
08:24:02 mariadb |
08:24:02 mariadb | Consider joining MariaDB's strong and vibrant community:
08:24:02 mariadb | https://mariadb.org/get-involved/
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.793460415Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.794002703Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=542.278µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.800056274Z level=info msg="Executing migration" id="create star table"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.801007712Z level=info msg="Migration successfully executed" id="create star table" duration=951.318µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.804201426Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.805317663Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.115877ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.809776062Z level=info msg="Executing migration" id="create org table v1"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.810651317Z level=info msg="Migration successfully executed" id="create org table v1" duration=875.255µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.815229632Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.816004812Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=774.679µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.819424417Z level=info msg="Executing migration" id="create org_user table v1"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.820149944Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=724.477µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.823336507Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.824096236Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=759.319µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.827424557Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.828188586Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=762.929µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.834215585Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.835012016Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=796.281µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.842010574Z level=info msg="Executing migration" id="Update org table charset"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.842048856Z level=info msg="Migration successfully executed" id="Update org table charset" duration=39.082µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.845278671Z level=info msg="Executing migration" id="Update org_user table charset"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.845316404Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=38.143µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.848486426Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.848738229Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=256.902µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.853408218Z level=info msg="Executing migration" id="create dashboard table"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.85460325Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.194602ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.85948339Z level=info msg="Executing migration" id="add index dashboard.account_id"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.86067095Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.1863ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.86398353Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.864856775Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=873.495µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.867891651Z level=info msg="Executing migration" id="create dashboard_tag table"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.868546165Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=654.133µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.872514687Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.873715829Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.200772ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.880148339Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.881792244Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.642574ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.891939693Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.899978746Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=8.040903ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.906148111Z level=info msg="Executing migration" id="create dashboard v2"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.906722451Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=574.44µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.910100134Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.911506486Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.410232ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.916202887Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.917519285Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.316278ms
08:24:02 policy-db-migrator | Waiting for mariadb port 3306...
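The "Waiting for mariadb port 3306..." line marks a readiness poll: the migrator repeatedly probes the database port and proceeds only once a TCP connection succeeds. A minimal sketch of that pattern, with the probe command as a parameter — the `wait_for` helper name and retry budget are illustrative, not the migrator's actual script (which probes `mariadb:3306` with `nc`):

```shell
# Retry a probe command until it succeeds or the retry budget is spent.
# Usage: wait_for "<probe command>" <max tries>
wait_for() {
  probe=$1
  tries=$2
  i=0
  while ! $probe; do
    i=$((i + 1))
    if [ "$i" -ge "$tries" ]; then
      return 1            # gave up: probe never succeeded
    fi
    sleep 1               # back off before the next attempt
  done
  return 0                # probe succeeded; safe to proceed
}

# The migrator's equivalent would be roughly:
#   wait_for "nc -z mariadb 3306" 30
```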
08:24:02 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 08:24:02 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 08:24:02 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 08:24:02 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 08:24:02 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 08:24:02 policy-db-migrator | Connection to mariadb (172.17.0.2) 3306 port [tcp/mysql] succeeded! 08:24:02 policy-db-migrator | 321 blocks 08:24:02 policy-db-migrator | Preparing upgrade release version: 0800 08:24:02 policy-db-migrator | Preparing upgrade release version: 0900 08:24:02 policy-db-migrator | Preparing upgrade release version: 1000 08:24:02 policy-db-migrator | Preparing upgrade release version: 1100 08:24:02 policy-db-migrator | Preparing upgrade release version: 1200 08:24:02 policy-db-migrator | Preparing upgrade release version: 1300 08:24:02 policy-db-migrator | Done 08:24:02 policy-db-migrator | name version 08:24:02 policy-db-migrator | policyadmin 0 08:24:02 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 08:24:02 policy-db-migrator | upgrade: 0 -> 1300 08:24:02 policy-db-migrator | 08:24:02 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-db-migrator | 08:24:02 policy-db-migrator | 08:24:02 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-db-migrator | CREATE TABLE IF NOT 
EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-db-migrator | 08:24:02 policy-db-migrator | 08:24:02 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-db-migrator | 08:24:02 policy-db-migrator | 08:24:02 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-db-migrator | 08:24:02 policy-db-migrator | 08:24:02 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName 
VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-db-migrator | 08:24:02 policy-db-migrator | 08:24:02 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-db-migrator | 08:24:02 policy-db-migrator | 08:24:02 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-db-migrator | 08:24:02 policy-db-migrator | 08:24:02 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-db-migrator | 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.924300762Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.924807318Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=506.226µs 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.930141182Z level=info msg="Executing migration" id="drop table dashboard_v1" 08:24:02 
grafana | logger=migrator t=2024-04-26T08:21:33.931497881Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.35871ms 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.936707008Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:33.936773591Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=67.003µs 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.004694572Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.007615953Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.921891ms 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.011052011Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 08:24:02 policy-apex-pdp | ssl.cipher.suites = null 08:24:02 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 08:24:02 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 08:24:02 policy-apex-pdp | ssl.engine.factory.class = null 08:24:02 policy-apex-pdp | ssl.key.password = null 08:24:02 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 08:24:02 policy-apex-pdp | ssl.keystore.certificate.chain = null 08:24:02 policy-apex-pdp | ssl.keystore.key = null 08:24:02 policy-apex-pdp | ssl.keystore.location = null 08:24:02 policy-apex-pdp | ssl.keystore.password = null 08:24:02 policy-apex-pdp | ssl.keystore.type = JKS 08:24:02 policy-apex-pdp | ssl.protocol = TLSv1.3 08:24:02 policy-apex-pdp | ssl.provider = null 08:24:02 policy-apex-pdp | ssl.secure.random.implementation = null 08:24:02 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 08:24:02 policy-apex-pdp | ssl.truststore.certificates = null 08:24:02 policy-apex-pdp | ssl.truststore.location = null 08:24:02 policy-apex-pdp | 
ssl.truststore.password = null 08:24:02 policy-apex-pdp | ssl.truststore.type = JKS 08:24:02 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 08:24:02 policy-apex-pdp | 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.603+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.603+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.603+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714119724602 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.605+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-385d2de3-e329-4c2e-8254-58c110e4f277-1, groupId=385d2de3-e329-4c2e-8254-58c110e4f277] Subscribed to topic(s): policy-pdp-pap 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.616+00:00|INFO|ServiceManager|main] service manager starting 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.616+00:00|INFO|ServiceManager|main] service manager starting topics 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.617+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=385d2de3-e329-4c2e-8254-58c110e4f277, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.636+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 08:24:02 policy-apex-pdp | allow.auto.create.topics = true 08:24:02 policy-apex-pdp | auto.commit.interval.ms = 5000 08:24:02 policy-apex-pdp | auto.include.jmx.reporter = 
true 08:24:02 policy-apex-pdp | auto.offset.reset = latest 08:24:02 policy-apex-pdp | bootstrap.servers = [kafka:9092] 08:24:02 policy-apex-pdp | check.crcs = true 08:24:02 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 08:24:02 policy-apex-pdp | client.id = consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2 08:24:02 policy-apex-pdp | client.rack = 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.01297608Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.92774ms 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.017275132Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.019037623Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.762061ms 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.022640129Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.023492513Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=850.704µs 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.030282874Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.033974084Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=3.68868ms 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.038881847Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.03970398Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=821.932µs 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.042600499Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 08:24:02 grafana | logger=migrator 
t=2024-04-26T08:21:34.043441633Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=840.964µs 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.04842461Z level=info msg="Executing migration" id="Update dashboard table charset" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.048453242Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=29.372µs 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.056054174Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.05618193Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=133.896µs 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.059691392Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.062005581Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.3137ms 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.066917194Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.068437032Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.519588ms 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.076500409Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.079194658Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.693009ms 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.083616926Z level=info msg="Executing migration" id="Add column uid in dashboard" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.086929528Z level=info msg="Migration successfully executed" id="Add 
column uid in dashboard" duration=3.311871ms 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.091144975Z level=info msg="Executing migration" id="Update uid column values in dashboard" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.091695963Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=550.188µs 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.09531286Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.096616967Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.307496ms 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.10092096Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.10209845Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.179941ms 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.106004792Z level=info msg="Executing migration" id="Update dashboard title length" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.106038473Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=31.621µs 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.109528834Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.110325515Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=796.511µs 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.115688272Z level=info msg="Executing migration" id="create dashboard_provisioning" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.116286243Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=597.851µs 
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.120223035Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.126724011Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=6.495075ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.129896805Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.131492507Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=1.594902ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.136517696Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.137590772Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.074276ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.14201432Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.143199801Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.184991ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.146367435Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.146829289Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=461.923µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.150181952Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.151054937Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=872.494µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.160071642Z level=info msg="Executing migration" id="Add check_sum column"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.162802563Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.730751ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.166854142Z level=info msg="Executing migration" id="Add index for dashboard_title"
08:24:02 kafka | ===> User
08:24:02 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
08:24:02 kafka | ===> Configuring ...
08:24:02 kafka | Running in Zookeeper mode...
08:24:02 kafka | ===> Running preflight checks ...
08:24:02 kafka | ===> Check if /var/lib/kafka/data is writable ...
08:24:02 kafka | ===> Check if Zookeeper is healthy ...
08:24:02 kafka | [2024-04-26 08:21:34,285] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
08:24:02 kafka | [2024-04-26 08:21:34,286] INFO Client environment:host.name=7c7374bf05f8 (org.apache.zookeeper.ZooKeeper)
08:24:02 kafka | [2024-04-26 08:21:34,286] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper)
08:24:02 kafka | [2024-04-26 08:21:34,286] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
08:24:02 kafka | [2024-04-26 08:21:34,286] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
08:24:02 kafka | [2024-04-26 08:21:34,286] INFO Client environment:java.class.path=/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.1-ccs.jar:/usr/share/java/cp-base-new/utility-belt-7.6.1.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.1-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.1-ccs.jar:/usr/share/java/cp-base-new/common-utils-7.6.1.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.1.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.1-ccs.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.1-ccs.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.1-ccs.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.1-ccs.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar:/usr/share/java/cp-base-new/jose4j-0.9.5.jar (org.apache.zookeeper.ZooKeeper)
08:24:02 kafka | [2024-04-26 08:21:34,286] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
08:24:02 kafka | [2024-04-26 08:21:34,286] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
08:24:02 kafka | [2024-04-26 08:21:34,287] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
08:24:02 kafka | [2024-04-26 08:21:34,287] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
08:24:02 kafka | [2024-04-26 08:21:34,287] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
08:24:02 kafka | [2024-04-26 08:21:34,287] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
08:24:02 kafka | [2024-04-26 08:21:34,287] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
08:24:02 kafka | [2024-04-26 08:21:34,287] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
08:24:02 kafka | [2024-04-26 08:21:34,287] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
08:24:02 kafka | [2024-04-26 08:21:34,287] INFO Client environment:os.memory.free=493MB (org.apache.zookeeper.ZooKeeper)
08:24:02 kafka | [2024-04-26 08:21:34,287] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
08:24:02 kafka | [2024-04-26 08:21:34,287] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
08:24:02 kafka | [2024-04-26 08:21:34,290] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@b7f23d9 (org.apache.zookeeper.ZooKeeper)
08:24:02 kafka | [2024-04-26 08:21:34,293] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
08:24:02 kafka | [2024-04-26 08:21:34,298] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
08:24:02 kafka | [2024-04-26 08:21:34,307] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
08:24:02 kafka | [2024-04-26 08:21:34,334] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn)
08:24:02 kafka | [2024-04-26 08:21:34,334] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
08:24:02 kafka | [2024-04-26 08:21:34,343] INFO Socket connection established, initiating session, client: /172.17.0.8:54350, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn)
08:24:02 kafka | [2024-04-26 08:21:34,378] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000003a6a90000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
08:24:02 kafka | [2024-04-26 08:21:34,501] INFO Session: 0x1000003a6a90000 closed (org.apache.zookeeper.ZooKeeper)
08:24:02 kafka | [2024-04-26 08:21:34,502] INFO EventThread shut down for session: 0x1000003a6a90000 (org.apache.zookeeper.ClientCnxn)
08:24:02 kafka | Using log4j config /etc/kafka/log4j.properties
08:24:02 policy-apex-pdp | connections.max.idle.ms = 540000
08:24:02 policy-apex-pdp | default.api.timeout.ms = 60000
08:24:02 policy-apex-pdp | enable.auto.commit = true
08:24:02 policy-apex-pdp | exclude.internal.topics = true
08:24:02 policy-apex-pdp | fetch.max.bytes = 52428800
08:24:02 policy-apex-pdp | fetch.max.wait.ms = 500
08:24:02 policy-apex-pdp | fetch.min.bytes = 1
08:24:02 policy-apex-pdp | group.id = 385d2de3-e329-4c2e-8254-58c110e4f277
08:24:02 policy-apex-pdp | group.instance.id = null
08:24:02 policy-apex-pdp | heartbeat.interval.ms = 3000
08:24:02 policy-apex-pdp | interceptor.classes = []
08:24:02 policy-apex-pdp | internal.leave.group.on.close = true
08:24:02 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
08:24:02 policy-apex-pdp | isolation.level = read_uncommitted
08:24:02 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
08:24:02 policy-apex-pdp | max.partition.fetch.bytes = 1048576
08:24:02 policy-apex-pdp | max.poll.interval.ms = 300000
08:24:02 policy-apex-pdp | max.poll.records = 500
08:24:02 policy-apex-pdp | metadata.max.age.ms = 300000
08:24:02 policy-apex-pdp | metric.reporters = []
08:24:02 policy-apex-pdp | metrics.num.samples = 2
08:24:02 policy-apex-pdp | metrics.recording.level = INFO
08:24:02 policy-apex-pdp | metrics.sample.window.ms = 30000
08:24:02 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
08:24:02 policy-apex-pdp | receive.buffer.bytes = 65536
08:24:02 policy-apex-pdp | reconnect.backoff.max.ms = 1000
08:24:02 policy-apex-pdp | reconnect.backoff.ms = 50
08:24:02 policy-apex-pdp | request.timeout.ms = 30000
08:24:02 policy-apex-pdp | retry.backoff.ms = 100
08:24:02 policy-apex-pdp | sasl.client.callback.handler.class = null
08:24:02 policy-apex-pdp | sasl.jaas.config = null
08:24:02 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
08:24:02 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
08:24:02 policy-apex-pdp | sasl.kerberos.service.name = null
08:24:02 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
08:24:02 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
08:24:02 policy-apex-pdp | sasl.login.callback.handler.class = null
08:24:02 policy-apex-pdp | sasl.login.class = null
08:24:02 policy-apex-pdp | sasl.login.connect.timeout.ms = null
08:24:02 policy-apex-pdp | sasl.login.read.timeout.ms = null
08:24:02 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
08:24:02 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
08:24:02 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
08:24:02 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
08:24:02 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql
08:24:02 policy-db-migrator | --------------
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
08:24:02 policy-db-migrator | --------------
08:24:02 policy-db-migrator | 
08:24:02 policy-db-migrator | 
08:24:02 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql
08:24:02 policy-db-migrator | --------------
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
08:24:02 policy-db-migrator | --------------
08:24:02 policy-db-migrator | 
08:24:02 policy-db-migrator | 
08:24:02 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql
08:24:02 policy-db-migrator | --------------
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
08:24:02 policy-db-migrator | --------------
08:24:02 policy-db-migrator | 
08:24:02 policy-db-migrator | 
08:24:02 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql
08:24:02 policy-db-migrator | --------------
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL)
08:24:02 policy-db-migrator | --------------
08:24:02 policy-db-migrator | 
08:24:02 policy-db-migrator | 
08:24:02 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql
08:24:02 policy-db-migrator | --------------
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
08:24:02 policy-db-migrator | --------------
08:24:02 policy-db-migrator | 
08:24:02 policy-db-migrator | 
08:24:02 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql
08:24:02 policy-db-migrator | --------------
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
08:24:02 policy-db-migrator | --------------
08:24:02 policy-db-migrator | 
08:24:02 policy-db-migrator | 
08:24:02 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql
08:24:02 policy-db-migrator | --------------
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
08:24:02 policy-db-migrator | --------------
08:24:02 policy-db-migrator | 
08:24:02 policy-db-migrator | 
08:24:02 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql
08:24:02 policy-db-migrator | --------------
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
08:24:02 policy-db-migrator | --------------
08:24:02 policy-db-migrator | 
08:24:02 kafka | ===> Launching ...
08:24:02 kafka | ===> Launching kafka ...
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.167835003Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=980.521µs
08:24:02 policy-db-migrator | 
08:24:02 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
08:24:02 kafka | [2024-04-26 08:21:35,224] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
08:24:02 mariadb | 
08:24:02 mariadb | 2024-04-26 08:21:28+00:00 [Note] [Entrypoint]: Database files initialized
08:24:02 mariadb | 2024-04-26 08:21:28+00:00 [Note] [Entrypoint]: Starting temporary server
08:24:02 prometheus | ts=2024-04-26T08:21:32.336Z caller=main.go:573 level=info msg="No time or size retention was set so using the default time retention" duration=15d
08:24:02 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.171196206Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
08:24:02 policy-apex-pdp | sasl.login.retry.backoff.ms = 100
08:24:02 kafka | [2024-04-26 08:21:35,533] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
08:24:02 zookeeper | ===> User
08:24:02 mariadb | 2024-04-26 08:21:28+00:00 [Note] [Entrypoint]: Waiting for server startup
08:24:02 mariadb | 2024-04-26 8:21:28 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 98 ...
08:24:02 mariadb | 2024-04-26 8:21:28 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
08:24:02 prometheus | ts=2024-04-26T08:21:32.336Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.2, branch=HEAD, revision=b4c0ab52c3e9b940ab803581ddae9b3d9a452337)"
08:24:02 policy-db-migrator | --------------
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.171555964Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=359.388µs
08:24:02 policy-apex-pdp | sasl.mechanism = GSSAPI
08:24:02 kafka | [2024-04-26 08:21:35,635] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
08:24:02 zookeeper | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
08:24:02 mariadb | 2024-04-26 8:21:28 0 [Note] InnoDB: Number of transaction pools: 1
08:24:02 mariadb | 2024-04-26 8:21:28 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
08:24:02 mariadb | 2024-04-26 8:21:28 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
08:24:02 prometheus | ts=2024-04-26T08:21:32.336Z caller=main.go:622 level=info build_context="(go=go1.22.2, platform=linux/amd64, user=root@b63f02a423d9, date=20240410-14:05:54, tags=netgo,builtinassets,stringlabels)"
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.176987155Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
08:24:02 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
08:24:02 kafka | [2024-04-26 08:21:35,637] INFO starting (kafka.server.KafkaServer)
08:24:02 zookeeper | ===> Configuring ...
08:24:02 mariadb | 2024-04-26 8:21:28 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
08:24:02 mariadb | 2024-04-26 8:21:28 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
08:24:02 mariadb | 2024-04-26 8:21:28 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
08:24:02 prometheus | ts=2024-04-26T08:21:32.336Z caller=main.go:623 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))"
08:24:02 policy-db-migrator | --------------
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.17726839Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=280.605µs
08:24:02 policy-apex-pdp | sasl.oauthbearer.expected.audience = null
08:24:02 kafka | [2024-04-26 08:21:35,637] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer)
08:24:02 zookeeper | ===> Running preflight checks ...
08:24:02 mariadb | 2024-04-26 8:21:28 0 [Note] InnoDB: Completed initialization of buffer pool
08:24:02 mariadb | 2024-04-26 8:21:28 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
08:24:02 mariadb | 2024-04-26 8:21:28 0 [Note] InnoDB: 128 rollback segments are active.
08:24:02 prometheus | ts=2024-04-26T08:21:32.336Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
08:24:02 policy-db-migrator | 
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.18153641Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
08:24:02 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
08:24:02 kafka | [2024-04-26 08:21:35,655] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient)
08:24:02 zookeeper | ===> Check if /var/lib/zookeeper/data is writable ...
08:24:02 mariadb | 2024-04-26 8:21:28 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
08:24:02 mariadb | 2024-04-26 8:21:28 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
08:24:02 mariadb | 2024-04-26 8:21:28 0 [Note] InnoDB: log sequence number 45452; transaction id 14
08:24:02 prometheus | ts=2024-04-26T08:21:32.336Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
08:24:02 policy-db-migrator | 
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.18269694Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.156669ms
08:24:02 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
08:24:02 kafka | [2024-04-26 08:21:35,660] INFO Client environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.ZooKeeper)
08:24:02 zookeeper | ===> Check if /var/lib/zookeeper/log is writable ...
08:24:02 mariadb | 2024-04-26 8:21:28 0 [Note] Plugin 'FEEDBACK' is disabled.
08:24:02 mariadb | 2024-04-26 8:21:28 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
08:24:02 mariadb | 2024-04-26 8:21:28 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode.
08:24:02 prometheus | ts=2024-04-26T08:21:32.341Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090
08:24:02 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.186407412Z level=info msg="Executing migration" id="Add isPublic for dashboard"
08:24:02 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
08:24:02 kafka | [2024-04-26 08:21:35,660] INFO Client environment:host.name=7c7374bf05f8 (org.apache.zookeeper.ZooKeeper)
08:24:02 zookeeper | ===> Launching ...
08:24:02 mariadb | 2024-04-26 8:21:28 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode.
08:24:02 mariadb | 2024-04-26 8:21:28 0 [Note] mariadbd: ready for connections.
08:24:02 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution
08:24:02 prometheus | ts=2024-04-26T08:21:32.342Z caller=main.go:1129 level=info msg="Starting TSDB ..."
08:24:02 policy-db-migrator | --------------
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.191278393Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=4.869141ms
08:24:02 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
08:24:02 kafka | [2024-04-26 08:21:35,661] INFO Client environment:java.version=11.0.22 (org.apache.zookeeper.ZooKeeper)
08:24:02 zookeeper | ===> Launching zookeeper ...
08:24:02 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json
08:24:02 mariadb | 2024-04-26 08:21:29+00:00 [Note] [Entrypoint]: Temporary server started.
08:24:02 mariadb | 2024-04-26 08:21:31+00:00 [Note] [Entrypoint]: Creating user policy_user
08:24:02 prometheus | ts=2024-04-26T08:21:32.343Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.202957715Z level=info msg="Executing migration" id="create data_source table"
08:24:02 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
08:24:02 kafka | [2024-04-26 08:21:35,661] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
08:24:02 zookeeper | [2024-04-26 08:21:32,740] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
08:24:02 simulator | overriding logback.xml
08:24:02 simulator | 2024-04-26 08:21:34,931 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json
08:24:02 mariadb | 2024-04-26 08:21:31+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation)
08:24:02 prometheus | ts=2024-04-26T08:21:32.343Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090
08:24:02 policy-db-migrator | --------------
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.20458339Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.623205ms
08:24:02 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
08:24:02 kafka | [2024-04-26 08:21:35,661] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
08:24:02 zookeeper | [2024-04-26 08:21:32,746] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
08:24:02 policy-pap | Waiting for mariadb port 3306...
08:24:02 simulator | 2024-04-26 08:21:35,035 INFO org.onap.policy.models.simulators starting
08:24:02 mariadb | 
08:24:02 prometheus | ts=2024-04-26T08:21:32.346Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
08:24:02 policy-db-migrator | 
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.208950455Z level=info msg="Executing migration" id="add index data_source.account_id"
08:24:02 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
08:24:02 kafka | [2024-04-26 08:21:35,661] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper)
08:24:02 zookeeper | [2024-04-26 08:21:32,746] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
08:24:02 policy-pap | mariadb (172.17.0.2:3306) open
08:24:02 simulator | 2024-04-26 08:21:35,036 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties
08:24:02 mariadb | 2024-04-26 08:21:31+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf
08:24:02 prometheus | ts=2024-04-26T08:21:32.346Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.2µs
08:24:02 
policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.210523186Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.571111ms 08:24:02 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 08:24:02 kafka | [2024-04-26 08:21:35,661] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 08:24:02 zookeeper | [2024-04-26 08:21:32,746] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 08:24:02 policy-pap | Waiting for kafka port 9092... 08:24:02 simulator | 2024-04-26 08:21:35,310 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION 08:24:02 mariadb | 08:24:02 prometheus | ts=2024-04-26T08:21:32.346Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 08:24:02 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.214199566Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 08:24:02 policy-apex-pdp | security.protocol = PLAINTEXT 08:24:02 zookeeper | [2024-04-26 08:21:32,746] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 08:24:02 policy-pap | kafka (172.17.0.8:9092) open 08:24:02 kafka | [2024-04-26 08:21:35,661] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 08:24:02 simulator | 2024-04-26 08:21:35,315 INFO org.onap.policy.models.simulators starting A&AI simulator 08:24:02 mariadb | 2024-04-26 08:21:31+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh 08:24:02 prometheus | ts=2024-04-26T08:21:32.346Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 08:24:02 
policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.215160316Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=960.471µs 08:24:02 policy-apex-pdp | security.providers = null 08:24:02 zookeeper | [2024-04-26 08:21:32,748] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 08:24:02 policy-pap | Waiting for api port 6969... 08:24:02 kafka | [2024-04-26 08:21:35,662] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 08:24:02 simulator | 2024-04-26 08:21:35,436 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 08:24:02 mariadb | #!/bin/bash -xv 08:24:02 prometheus | ts=2024-04-26T08:21:32.346Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=27.722µs wal_replay_duration=292.284µs wbl_replay_duration=250ns total_replay_duration=350.117µs 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.220641619Z level=info msg="Executing migration" id="drop 
index IDX_data_source_account_id - v1" 08:24:02 policy-apex-pdp | send.buffer.bytes = 131072 08:24:02 zookeeper | [2024-04-26 08:21:32,748] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 08:24:02 policy-pap | api (172.17.0.7:6969) open 08:24:02 kafka | [2024-04-26 08:21:35,662] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 08:24:02 simulator | 2024-04-26 08:21:35,448 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 08:24:02 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved 08:24:02 prometheus | ts=2024-04-26T08:21:32.351Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.221514864Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=873.755µs 08:24:02 policy-apex-pdp | session.timeout.ms = 45000 08:24:02 zookeeper | [2024-04-26 08:21:32,748] INFO Purge task is not scheduled. 
(org.apache.zookeeper.server.DatadirCleanupManager) 08:24:02 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 08:24:02 kafka | [2024-04-26 08:21:35,662] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 08:24:02 simulator | 2024-04-26 08:21:35,451 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,STOPPED}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 08:24:02 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 
08:24:02 prometheus | ts=2024-04-26T08:21:32.351Z caller=main.go:1153 level=info msg="TSDB started" 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.225524751Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 08:24:02 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 08:24:02 zookeeper | [2024-04-26 08:21:32,748] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 08:24:02 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 08:24:02 kafka | [2024-04-26 08:21:35,662] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 08:24:02 simulator | 2024-04-26 08:21:35,458 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 08:24:02 mariadb | # 08:24:02 prometheus | ts=2024-04-26T08:21:32.352Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 08:24:02 policy-db-migrator | 08:24:02 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 08:24:02 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 08:24:02 policy-apex-pdp | ssl.cipher.suites = null 08:24:02 policy-pap | 08:24:02 kafka | [2024-04-26 08:21:35,662] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 08:24:02 simulator | 2024-04-26 08:21:35,551 INFO Session workerName=node0 08:24:02 mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); 08:24:02 prometheus | ts=2024-04-26T08:21:32.354Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.83986ms db_storage=1.79µs remote_storage=2.35µs web_handler=870ns query_engine=1.3µs scrape=516.026µs scrape_sd=261.273µs notify=41.052µs notify_sd=21.161µs rules=2.4µs 
tracing=9.5µs 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 08:24:02 policy-pap | . ____ _ __ _ _ 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.226405806Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=876.455µs 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.229338907Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 08:24:02 kafka | [2024-04-26 08:21:35,662] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 08:24:02 simulator | 2024-04-26 08:21:36,115 INFO Using GSON for REST calls 08:24:02 mariadb | # you may not use this file except in compliance with the License. 08:24:02 prometheus | ts=2024-04-26T08:21:32.354Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 08:24:02 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.236112347Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.77258ms 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.242213361Z level=info msg="Executing migration" id="create data_source table v2" 08:24:02 kafka | [2024-04-26 08:21:35,662] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 08:24:02 simulator | 2024-04-26 08:21:36,210 INFO Started o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE} 08:24:02 mariadb | # You may obtain a copy of the License at 08:24:02 prometheus | ts=2024-04-26T08:21:32.354Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 
08:24:02 policy-db-migrator | 08:24:02 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 08:24:02 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.243786753Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.573672ms 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.249036284Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 08:24:02 kafka | [2024-04-26 08:21:35,663] INFO Client environment:os.memory.free=1008MB (org.apache.zookeeper.ZooKeeper) 08:24:02 simulator | 2024-04-26 08:21:36,218 INFO Started A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} 08:24:02 mariadb | # 08:24:02 policy-db-migrator | 08:24:02 policy-apex-pdp | ssl.engine.factory.class = null 08:24:02 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.250052637Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=1.017153ms 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.253984739Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 08:24:02 kafka | [2024-04-26 08:21:35,663] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 08:24:02 simulator | 2024-04-26 08:21:36,227 INFO Started Server@64a8c844{STARTING}[11.0.20,sto=0] @1945ms 08:24:02 mariadb | # http://www.apache.org/licenses/LICENSE-2.0 08:24:02 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 08:24:02 policy-apex-pdp | ssl.key.password = null 08:24:02 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.259253532Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=5.261732ms 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.262320039Z level=info msg="Executing 
migration" id="Drop old table data_source_v1 #2" 08:24:02 kafka | [2024-04-26 08:21:35,663] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 08:24:02 simulator | 2024-04-26 08:21:36,228 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@64a8c844{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@3f6db3fb{/,null,AVAILABLE}, connector=A&AI simulator@6f152006{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-2e61d218==org.glassfish.jersey.servlet.ServletContainer@60d118f1{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4223 ms. 
08:24:02 mariadb | # 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 08:24:02 policy-pap | =========|_|==============|___/=/_/_/_/ 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.262860758Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=540.648µs 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.265923785Z level=info msg="Executing migration" id="Add column with_credentials" 08:24:02 kafka | [2024-04-26 08:21:35,666] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@447a020 (org.apache.zookeeper.ZooKeeper) 08:24:02 simulator | 2024-04-26 08:21:36,241 INFO org.onap.policy.models.simulators starting SDNC simulator 08:24:02 mariadb | # Unless required by applicable law or agreed to in writing, software 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 08:24:02 policy-apex-pdp | ssl.keystore.certificate.chain = null 08:24:02 policy-pap | :: Spring Boot :: (v3.1.10) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.267772421Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=1.843866ms 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.271871703Z level=info msg="Executing migration" id="Add secure json data column" 08:24:02 kafka | [2024-04-26 08:21:35,670] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 08:24:02 simulator | 2024-04-26 08:21:36,249 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, 
user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 08:24:02 mariadb | # distributed under the License is distributed on an "AS IS" BASIS, 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-apex-pdp | ssl.keystore.key = null 08:24:02 policy-pap | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.273538628Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=1.666105ms 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.280012132Z level=info msg="Executing migration" id="Update data_source table charset" 08:24:02 kafka | [2024-04-26 08:21:35,676] INFO zookeeper.request.timeout value is 0. 
feature enabled=false (org.apache.zookeeper.ClientCnxn) 08:24:02 simulator | 2024-04-26 08:21:36,250 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 08:24:02 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 08:24:02 policy-db-migrator | 08:24:02 policy-apex-pdp | ssl.keystore.location = null 08:24:02 policy-pap | [2024-04-26T08:21:53.911+00:00|INFO|Version|background-preinit] HV000001: Hibernate Validator 8.0.1.Final 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.280049084Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=38.592µs 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.283169876Z level=info msg="Executing migration" id="Update initial version to 1" 08:24:02 kafka | [2024-04-26 08:21:35,681] INFO [ZooKeeperClient Kafka server] Waiting until connected. 
(kafka.zookeeper.ZooKeeperClient) 08:24:02 simulator | 2024-04-26 08:21:36,251 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,STOPPED}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 08:24:02 mariadb | # See the License for the specific language governing permissions and 08:24:02 policy-db-migrator | 08:24:02 policy-apex-pdp | ssl.keystore.password = null 08:24:02 policy-pap | [2024-04-26T08:21:53.970+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.11 with PID 34 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.283343754Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=174.638µs 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.285563779Z level=info msg="Executing migration" id="Add read_only data column" 08:24:02 kafka | [2024-04-26 08:21:35,685] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) 08:24:02 simulator | 2024-04-26 08:21:36,252 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0 08:24:02 mariadb | # limitations under the License. 
08:24:02 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 08:24:02 policy-apex-pdp | ssl.keystore.type = JKS 08:24:02 policy-pap | [2024-04-26T08:21:53.971+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.288730453Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=3.162574ms 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.297110915Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 08:24:02 kafka | [2024-04-26 08:21:35,691] INFO Socket connection established, initiating session, client: /172.17.0.8:53476, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) 08:24:02 simulator | 2024-04-26 08:21:36,256 INFO Session workerName=node0 08:24:02 mariadb | 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-apex-pdp | ssl.protocol = TLSv1.3 08:24:02 policy-pap | [2024-04-26T08:21:55.909+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.297446972Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=334.567µs 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.303423131Z level=info msg="Executing migration" id="Update json_data with nulls" 08:24:02 kafka | [2024-04-26 08:21:35,700] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x1000003a6a90001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 08:24:02 simulator | 2024-04-26 08:21:36,329 INFO Using GSON for REST calls 08:24:02 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 08:24:02 policy-apex-pdp | ssl.provider = null 08:24:02 policy-pap | [2024-04-26T08:21:55.997+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 79 ms. Found 7 JPA repository interfaces. 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.303659173Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=235.522µs 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.306145942Z level=info msg="Executing migration" id="Add uid column" 08:24:02 kafka | [2024-04-26 08:21:35,704] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient) 08:24:02 mariadb | do 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-apex-pdp | ssl.secure.random.implementation = null 08:24:02 policy-pap | [2024-04-26T08:21:56.437+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.31058208Z level=info msg="Migration successfully executed" id="Add uid column" duration=4.435728ms 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.316632893Z level=info msg="Executing migration" id="Update uid value" 08:24:02 kafka | [2024-04-26 08:21:36,015] INFO Cluster ID = qUquThiHQAKlsircSK68zw (kafka.server.KafkaServer) 08:24:02 simulator | 2024-04-26 08:21:36,339 INFO Started o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE} 08:24:02 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" 08:24:02 policy-db-migrator | 08:24:02 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 08:24:02 policy-pap | [2024-04-26T08:21:56.438+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.316837503Z level=info msg="Migration successfully executed" id="Update uid value" duration=204.86µs 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.319691021Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 08:24:02 kafka | [2024-04-26 08:21:36,019] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 08:24:02 simulator | 2024-04-26 08:21:36,342 INFO Started SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} 08:24:02 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" 08:24:02 policy-db-migrator | 08:24:02 policy-apex-pdp | ssl.truststore.certificates = null 08:24:02 policy-pap | [2024-04-26T08:21:57.039+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.321247441Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.55435ms 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.325785135Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 08:24:02 kafka | [2024-04-26 08:21:36,091] INFO KafkaConfig values: 08:24:02 simulator | 2024-04-26 08:21:36,342 INFO Started Server@70efb718{STARTING}[11.0.20,sto=0] @2060ms 08:24:02 mariadb | done 08:24:02 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 08:24:02 policy-apex-pdp | ssl.truststore.location = null 08:24:02 policy-pap | [2024-04-26T08:21:57.048+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.327766688Z level=info msg="Migration successfully executed" id="add 
unique index datasource_org_id_is_default" duration=1.984863ms 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.342762741Z level=info msg="Executing migration" id="create api_key table" 08:24:02 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 08:24:02 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 08:24:02 simulator | 2024-04-26 08:21:36,342 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@70efb718{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b70da4c{/,null,AVAILABLE}, connector=SDNC simulator@c5ee75e{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-bf1ec20==org.glassfish.jersey.servlet.ServletContainer@636d8afb{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4909 ms. 
08:24:02 simulator | 2024-04-26 08:21:36,366 INFO org.onap.policy.models.simulators starting SO simulator
08:24:02 policy-db-migrator | --------------
08:24:02 policy-apex-pdp | ssl.truststore.password = null
08:24:02 policy-pap | [2024-04-26T08:21:57.051+00:00|INFO|StandardService|main] Starting service [Tomcat]
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.344224647Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.460446ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.348641924Z level=info msg="Executing migration" id="add index api_key.account_id"
08:24:02 kafka | alter.config.policy.class.name = null
08:24:02 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;'
08:24:02 simulator | 2024-04-26 08:21:36,368 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
08:24:02 simulator | 2024-04-26 08:21:36,369 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
08:24:02 policy-apex-pdp | ssl.truststore.type = JKS
08:24:02 policy-pap | [2024-04-26T08:21:57.051+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.19]
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.350395295Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.752171ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.356694311Z level=info msg="Executing migration" id="add index api_key.key"
08:24:02 kafka | alter.log.dirs.replication.quota.window.num = 11
08:24:02 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;'
08:24:02 simulator | 2024-04-26 08:21:36,371 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,STOPPED}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
08:24:02 simulator | 2024-04-26 08:21:36,371 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
08:24:02 policy-db-migrator | --------------
08:24:02 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
08:24:02 policy-pap | [2024-04-26T08:21:57.148+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.35766925Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=975.01µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.368199124Z level=info msg="Executing migration" id="add index api_key.account_id_name"
08:24:02 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1
08:24:02 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
08:24:02 simulator | 2024-04-26 08:21:36,379 INFO Session workerName=node0
08:24:02 policy-db-migrator | 
08:24:02 policy-apex-pdp | 
08:24:02 policy-pap | [2024-04-26T08:21:57.148+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3105 ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.369257419Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.057955ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.376324344Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
08:24:02 kafka | authorizer.class.name = 
08:24:02 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;'
08:24:02 policy-db-migrator | 
08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.643+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
08:24:02 policy-pap | [2024-04-26T08:21:57.537+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.377927306Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.602472ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.383371267Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
08:24:02 kafka | auto.create.topics.enable = true
08:24:02 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;'
08:24:02 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.644+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
08:24:02 policy-pap | [2024-04-26T08:21:57.588+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 5.6.15.Final
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.383938407Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=571.04µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.387712931Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
08:24:02 kafka | auto.include.jmx.reporter = true
08:24:02 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
08:24:02 policy-db-migrator | --------------
08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.644+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714119724643
08:24:02 policy-pap | [2024-04-26T08:21:57.928+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.388565095Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=851.434µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.391441474Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
08:24:02 kafka | auto.leader.rebalance.enable = true
08:24:02 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;'
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.644+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2, groupId=385d2de3-e329-4c2e-8254-58c110e4f277] Subscribed to topic(s): policy-pdp-pap
08:24:02 policy-pap | [2024-04-26T08:21:58.024+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@51288417
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.402531646Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=11.089822ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.40513438Z level=info msg="Executing migration" id="create api_key table v2"
08:24:02 kafka | background.threads = 10
08:24:02 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;'
08:24:02 policy-db-migrator | --------------
08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.644+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=41465092-4801-404b-834e-cb5739a089eb, alive=false, publisher=null]]: starting
08:24:02 policy-pap | [2024-04-26T08:21:58.025+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.405772874Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=638.013µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.416522648Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
08:24:02 kafka | broker.heartbeat.interval.ms = 2000
08:24:02 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
08:24:02 policy-db-migrator | 
08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.655+00:00|INFO|ProducerConfig|main] ProducerConfig values:
08:24:02 policy-pap | [2024-04-26T08:21:58.052+00:00|INFO|Dialect|main] HHH000400: Using dialect: org.hibernate.dialect.MariaDB106Dialect
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.418572744Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=2.048776ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.42313171Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
08:24:02 kafka | broker.id = 1
08:24:02 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;'
08:24:02 simulator | 2024-04-26 08:21:36,455 INFO Using GSON for REST calls
08:24:02 policy-db-migrator | 
08:24:02 policy-apex-pdp | acks = -1
08:24:02 policy-pap | [2024-04-26T08:21:59.507+00:00|INFO|JtaPlatformInitiator|main] HHH000490: Using JtaPlatform implementation: [org.hibernate.engine.transaction.jta.platform.internal.NoJtaPlatform]
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.423777743Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=646.203µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.428517768Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
08:24:02 kafka | broker.id.generation.enable = true
08:24:02 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;'
08:24:02 simulator | 2024-04-26 08:21:36,468 INFO Started o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}
08:24:02 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
08:24:02 policy-apex-pdp | auto.include.jmx.reporter = true
08:24:02 policy-pap | [2024-04-26T08:21:59.517+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.429842365Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.327167ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.433357647Z level=info msg="Executing migration" id="copy api_key v1 to v2"
08:24:02 kafka | broker.rack = null
08:24:02 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
08:24:02 simulator | 2024-04-26 08:21:36,470 INFO Started SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}
08:24:02 policy-db-migrator | --------------
08:24:02 policy-apex-pdp | batch.size = 16384
08:24:02 policy-pap | [2024-04-26T08:22:00.029+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.43380459Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=446.613µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.440085735Z level=info msg="Executing migration" id="Drop old table api_key_v1"
08:24:02 kafka | broker.session.timeout.ms = 9000
08:24:02 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;'
08:24:02 simulator | 2024-04-26 08:21:36,471 INFO Started Server@b7838a9{STARTING}[11.0.20,sto=0] @2189ms
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL)
08:24:02 policy-apex-pdp | bootstrap.servers = [kafka:9092]
08:24:02 policy-pap | [2024-04-26T08:22:00.423+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.440633083Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=545.119µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.44445086Z level=info msg="Executing migration" id="Update api_key table charset"
08:24:02 kafka | client.quota.callback.class = null
08:24:02 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;'
08:24:02 simulator | 2024-04-26 08:21:36,471 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@b7838a9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@696f0212{/,null,AVAILABLE}, connector=SO simulator@4e858e0a{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-4e70a728==org.glassfish.jersey.servlet.ServletContainer@3238e994{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4899 ms.
08:24:02 policy-db-migrator | --------------
08:24:02 policy-apex-pdp | buffer.memory = 33554432
08:24:02 policy-pap | [2024-04-26T08:22:00.536+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.444482551Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=32.612µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.447615043Z level=info msg="Executing migration" id="Add expires to api_key table"
08:24:02 kafka | compression.type = producer
08:24:02 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
08:24:02 simulator | 2024-04-26 08:21:36,472 INFO org.onap.policy.models.simulators starting VFC simulator
08:24:02 policy-db-migrator | 
08:24:02 policy-apex-pdp | client.dns.lookup = use_all_dns_ips
08:24:02 policy-pap | [2024-04-26T08:22:00.853+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.453051744Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=5.436061ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.457159615Z level=info msg="Executing migration" id="Add service account foreign key"
08:24:02 kafka | connection.failed.authentication.delay.ms = 100
08:24:02 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;'
08:24:02 simulator | 2024-04-26 08:21:36,475 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
08:24:02 policy-db-migrator | 
08:24:02 policy-apex-pdp | client.id = producer-1
08:24:02 policy-pap | allow.auto.create.topics = true
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.459843104Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.683039ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.463091472Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
08:24:02 kafka | connections.max.idle.ms = 600000
08:24:02 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;'
08:24:02 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
08:24:02 policy-apex-pdp | compression.type = none
08:24:02 policy-pap | auto.commit.interval.ms = 5000
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.463265241Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=173.789µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.466498278Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
08:24:02 kafka | connections.max.reauth.ms = 0
08:24:02 mariadb | 
08:24:02 policy-db-migrator | --------------
08:24:02 policy-apex-pdp | connections.max.idle.ms = 540000
08:24:02 policy-pap | auto.include.jmx.reporter = true
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.468938713Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.440335ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.477165398Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
08:24:02 kafka | control.plane.listener.name = null
08:24:02 simulator | 2024-04-26 08:21:36,476 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL)
08:24:02 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
08:24:02 policy-apex-pdp | delivery.timeout.ms = 120000
08:24:02 policy-pap | auto.offset.reset = latest
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.479746152Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.578324ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.483866685Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
08:24:02 kafka | controlled.shutdown.enable = true
08:24:02 policy-db-migrator | --------------
08:24:02 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;'
08:24:02 policy-apex-pdp | enable.idempotence = true
08:24:02 policy-pap | bootstrap.servers = [kafka:9092]
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.484640044Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=773.649µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.488396888Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
08:24:02 kafka | controlled.shutdown.max.retries = 3
08:24:02 policy-db-migrator | 
08:24:02 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql
08:24:02 policy-apex-pdp | interceptor.classes = []
08:24:02 policy-pap | check.crcs = true
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.488896344Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=499.326µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.495097844Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
08:24:02 kafka | controlled.shutdown.retry.backoff.ms = 5000
08:24:02 simulator | 2024-04-26 08:21:36,481 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,STOPPED}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
08:24:02 policy-db-migrator | 
08:24:02 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
08:24:02 policy-pap | client.dns.lookup = use_all_dns_ips
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.496449014Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.35839ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.501961568Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
08:24:02 kafka | controller.listener.names = null
08:24:02 simulator | 2024-04-26 08:21:36,482 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.11+9-alpine-r0
08:24:02 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql
08:24:02 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp
08:24:02 policy-apex-pdp | linger.ms = 0
08:24:02 policy-pap | client.id = consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-1
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.502778051Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=817.303µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.506779577Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
08:24:02 kafka | controller.quorum.append.linger.ms = 25
08:24:02 simulator | 2024-04-26 08:21:36,510 INFO Session workerName=node0
08:24:02 policy-db-migrator | --------------
08:24:02 mariadb | 
08:24:02 policy-apex-pdp | max.block.ms = 60000
08:24:02 policy-pap | client.rack = 
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.507388758Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=609.352µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.515311037Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
08:24:02 kafka | controller.quorum.election.backoff.max.ms = 1000
08:24:02 simulator | 2024-04-26 08:21:36,575 INFO Using GSON for REST calls
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
08:24:02 mariadb | 2024-04-26 08:21:32+00:00 [Note] [Entrypoint]: Stopping temporary server
08:24:02 policy-apex-pdp | max.in.flight.requests.per.connection = 5
08:24:02 policy-pap | connections.max.idle.ms = 540000
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.515887107Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=577.449µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.52564884Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
08:24:02 kafka | controller.quorum.election.timeout.ms = 1000
08:24:02 simulator | 2024-04-26 08:21:36,588 INFO Started o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}
08:24:02 policy-db-migrator | --------------
08:24:02 mariadb | 2024-04-26 8:21:32 0 [Note] mariadbd (initiated by: unknown): Normal shutdown
08:24:02 policy-apex-pdp | max.request.size = 1048576
08:24:02 policy-pap | default.api.timeout.ms = 60000
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.525729565Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=79.244µs
08:24:02 zookeeper | [2024-04-26 08:21:32,749] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil)
08:24:02 kafka | controller.quorum.fetch.timeout.ms = 2000
08:24:02 simulator | 2024-04-26 08:21:36,590 INFO Started VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}
08:24:02 policy-db-migrator | 
08:24:02 mariadb | 2024-04-26 8:21:32 0 [Note] InnoDB: FTS optimize thread exiting.
08:24:02 policy-apex-pdp | metadata.max.age.ms = 300000
08:24:02 policy-pap | enable.auto.commit = true
08:24:02 zookeeper | [2024-04-26 08:21:32,749] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.529023065Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
08:24:02 kafka | controller.quorum.request.timeout.ms = 2000
08:24:02 simulator | 2024-04-26 08:21:36,591 INFO Started Server@f478a81{STARTING}[11.0.20,sto=0] @2309ms
08:24:02 policy-db-migrator | 
08:24:02 mariadb | 2024-04-26 8:21:32 0 [Note] InnoDB: Starting shutdown...
08:24:02 policy-apex-pdp | metadata.max.idle.ms = 300000
08:24:02 policy-pap | exclude.internal.topics = true
08:24:02 zookeeper | [2024-04-26 08:21:32,750] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.529043506Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=21.031µs
08:24:02 kafka | controller.quorum.retry.backoff.ms = 20
08:24:02 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
08:24:02 simulator | 2024-04-26 08:21:36,592 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@f478a81{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@19553973{/,null,AVAILABLE}, connector=VFC simulator@1c3146bc{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-47a86fbb==org.glassfish.jersey.servlet.ServletContainer@f8e3b478{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4885 ms.
08:24:02 mariadb | 2024-04-26 8:21:32 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
08:24:02 policy-apex-pdp | metric.reporters = []
08:24:02 policy-pap | fetch.max.bytes = 52428800
08:24:02 zookeeper | [2024-04-26 08:21:32,750] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
08:24:02 kafka | controller.quorum.voters = []
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.532235291Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.534981963Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.746302ms
08:24:02 simulator | 2024-04-26 08:21:36,594 INFO org.onap.policy.models.simulators started
08:24:02 mariadb | 2024-04-26 8:21:32 0 [Note] InnoDB: Buffer pool(s) dump completed at 240426 8:21:32
08:24:02 policy-apex-pdp | metrics.num.samples = 2
08:24:02 policy-pap | fetch.max.wait.ms = 500
08:24:02 zookeeper | [2024-04-26 08:21:32,750] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
08:24:02 kafka | controller.quota.window.num = 11
08:24:02 policy-db-migrator | --------------
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.539742858Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
08:24:02 mariadb | 2024-04-26 8:21:32 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
08:24:02 policy-apex-pdp | metrics.recording.level = INFO
08:24:02 policy-pap | fetch.min.bytes = 1
08:24:02 zookeeper | [2024-04-26 08:21:32,750] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
08:24:02 kafka | controller.quota.window.size.seconds = 1
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.545040212Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=5.280813ms
08:24:02 mariadb | 2024-04-26 8:21:32 0 [Note] InnoDB: Shutdown completed; log sequence number 347307; transaction id 298
08:24:02 policy-apex-pdp | metrics.sample.window.ms = 30000
08:24:02 policy-pap | group.id = db954cd2-8764-4a44-90af-3bb7f2069f83
08:24:02 zookeeper | [2024-04-26 08:21:32,750] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain)
08:24:02 kafka | controller.socket.timeout.ms = 30000
08:24:02 policy-db-migrator | --------------
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.550068071Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
08:24:02 mariadb | 2024-04-26 8:21:32 0 [Note] mariadbd: Shutdown complete
08:24:02 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true
08:24:02 policy-pap | group.instance.id = null
08:24:02 zookeeper | [2024-04-26 08:21:32,761] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@3246fb96 (org.apache.zookeeper.server.ServerMetrics)
08:24:02 kafka | create.topic.policy.class.name = null
08:24:02 policy-db-migrator | 
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.550189297Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=128.757µs
08:24:02 mariadb | 
08:24:02 policy-apex-pdp | partitioner.availability.timeout.ms = 0
08:24:02 policy-pap | heartbeat.interval.ms = 3000
08:24:02 zookeeper | [2024-04-26 08:21:32,763] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
08:24:02 kafka | default.replication.factor = 1
08:24:02 policy-db-migrator | 
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.555733033Z level=info msg="Executing migration" id="create quota table v1"
08:24:02 mariadb | 2024-04-26 08:21:32+00:00 [Note] [Entrypoint]: Temporary server stopped
08:24:02 policy-apex-pdp | partitioner.class = null
08:24:02 policy-pap | interceptor.classes = []
08:24:02 zookeeper | [2024-04-26 08:21:32,763] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider)
08:24:02 kafka | delegation.token.expiry.check.interval.ms = 3600000
08:24:02 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.556963347Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.231874ms
08:24:02 mariadb | 
08:24:02 policy-apex-pdp | partitioner.ignore.keys = false
08:24:02 policy-pap | internal.leave.group.on.close = true
08:24:02 zookeeper | [2024-04-26 08:21:32,765] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
08:24:02 kafka | delegation.token.expiry.time.ms = 86400000
08:24:02 policy-db-migrator | --------------
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.565288737Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
08:24:02 mariadb | 2024-04-26 08:21:32+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up.
08:24:02 policy-apex-pdp | receive.buffer.bytes = 32768
08:24:02 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
08:24:02 zookeeper | [2024-04-26 08:21:32,774] INFO (org.apache.zookeeper.server.ZooKeeperServer)
08:24:02 kafka | delegation.token.master.key = null
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.56592284Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=633.772µs
08:24:02 mariadb | 
08:24:02 policy-apex-pdp | reconnect.backoff.max.ms = 1000
08:24:02 policy-pap | isolation.level = read_uncommitted
08:24:02 zookeeper | [2024-04-26 08:21:32,774] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer)
08:24:02 kafka | delegation.token.max.lifetime.ms = 604800000
08:24:02 policy-db-migrator | --------------
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.570814472Z level=info msg="Executing migration" id="Update quota table charset"
08:24:02 mariadb | 2024-04-26 8:21:32 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ...
08:24:02 policy-apex-pdp | reconnect.backoff.ms = 50 08:24:02 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 08:24:02 zookeeper | [2024-04-26 08:21:32,774] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 kafka | delegation.token.secret.key = null 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.570836643Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=23.241µs 08:24:02 mariadb | 2024-04-26 8:21:32 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 08:24:02 policy-apex-pdp | request.timeout.ms = 30000 08:24:02 policy-pap | max.partition.fetch.bytes = 1048576 08:24:02 zookeeper | [2024-04-26 08:21:32,774] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 kafka | delete.records.purgatory.purge.interval.requests = 1 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.577454614Z level=info msg="Executing migration" id="create plugin_setting table" 08:24:02 mariadb | 2024-04-26 8:21:32 0 [Note] InnoDB: Number of transaction pools: 1 08:24:02 policy-apex-pdp | retries = 2147483647 08:24:02 policy-pap | max.poll.interval.ms = 300000 08:24:02 zookeeper | [2024-04-26 08:21:32,774] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 kafka | delete.topic.enable = true 08:24:02 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.578312748Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=861.384µs 08:24:02 mariadb | 2024-04-26 8:21:32 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 08:24:02 policy-apex-pdp | retry.backoff.ms = 100 08:24:02 policy-pap | max.poll.records = 500 08:24:02 zookeeper | [2024-04-26 08:21:32,774] INFO / /__ | (_) | | (_) | | < | __/ | __/ 
| |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 kafka | early.start.listeners = null 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.585434587Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 08:24:02 mariadb | 2024-04-26 8:21:32 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 08:24:02 policy-apex-pdp | sasl.client.callback.handler.class = null 08:24:02 policy-pap | metadata.max.age.ms = 300000 08:24:02 zookeeper | [2024-04-26 08:21:32,774] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 kafka | fetch.max.bytes = 57671680 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.586547334Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.116137ms 08:24:02 mariadb | 2024-04-26 8:21:32 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 08:24:02 policy-apex-pdp | sasl.jaas.config = null 08:24:02 policy-pap | metric.reporters = [] 08:24:02 zookeeper | [2024-04-26 08:21:32,774] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 kafka | fetch.purgatory.purge.interval.requests = 1000 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.592182554Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 08:24:02 mariadb | 2024-04-26 8:21:32 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 08:24:02 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 08:24:02 policy-pap | 
metrics.num.samples = 2 08:24:02 zookeeper | [2024-04-26 08:21:32,774] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.59501279Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.836556ms 08:24:02 mariadb | 2024-04-26 8:21:32 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 08:24:02 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 08:24:02 policy-pap | metrics.recording.level = INFO 08:24:02 zookeeper | [2024-04-26 08:21:32,774] INFO (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 kafka | group.consumer.heartbeat.interval.ms = 5000 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.597932132Z level=info msg="Executing migration" id="Update plugin_setting table charset" 08:24:02 mariadb | 2024-04-26 8:21:32 0 [Note] InnoDB: Completed initialization of buffer pool 08:24:02 policy-apex-pdp | sasl.kerberos.service.name = null 08:24:02 policy-pap | metrics.sample.window.ms = 30000 08:24:02 zookeeper | [2024-04-26 08:21:32,775] INFO Server environment:zookeeper.version=3.8.4-9316c2a7a97e1666d8f4593f34dd6fc36ecc436c, built on 2024-02-12 22:16 UTC (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 kafka | group.consumer.max.heartbeat.interval.ms = 15000 08:24:02 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.597988165Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=49.342µs 08:24:02 mariadb | 2024-04-26 8:21:32 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 08:24:02 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 08:24:02 policy-pap | 
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 08:24:02 zookeeper | [2024-04-26 08:21:32,775] INFO Server environment:host.name=09fae81f821c (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 kafka | group.consumer.max.session.timeout.ms = 60000 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.600910455Z level=info msg="Executing migration" id="create session table" 08:24:02 mariadb | 2024-04-26 8:21:33 0 [Note] InnoDB: 128 rollback segments are active. 08:24:02 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 08:24:02 policy-pap | receive.buffer.bytes = 65536 08:24:02 zookeeper | [2024-04-26 08:21:32,775] INFO Server environment:java.version=11.0.22 (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 kafka | group.consumer.max.size = 2147483647 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.601843253Z level=info msg="Migration successfully executed" id="create session table" duration=932.848µs 08:24:02 mariadb | 2024-04-26 8:21:33 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 08:24:02 policy-apex-pdp | sasl.login.callback.handler.class = null 08:24:02 policy-pap | reconnect.backoff.max.ms = 1000 08:24:02 zookeeper | [2024-04-26 08:21:32,775] INFO Server environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 kafka | group.consumer.min.heartbeat.interval.ms = 5000 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.612188767Z level=info msg="Executing migration" id="Drop old table playlist table" 08:24:02 mariadb | 2024-04-26 8:21:33 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 08:24:02 policy-apex-pdp | sasl.login.class = null 08:24:02 policy-pap | reconnect.backoff.ms = 50 08:24:02 zookeeper | [2024-04-26 08:21:32,775] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 kafka | group.consumer.min.session.timeout.ms = 45000 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.612326784Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=126.376µs 08:24:02 mariadb | 2024-04-26 8:21:33 0 [Note] InnoDB: log sequence number 347307; transaction id 299 08:24:02 policy-apex-pdp | sasl.login.connect.timeout.ms = null 08:24:02 policy-pap | request.timeout.ms = 30000 08:24:02 kafka | group.consumer.session.timeout.ms = 45000 08:24:02 zookeeper | [2024-04-26 08:21:32,775] INFO Server 
environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/connect-json-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/trogdor-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.4.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Fi
nal.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jose4j-0.9.4.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/ka
fka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.4.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/kafka-tool
s-api-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jline-3.25.1.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.54.v20240208.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.1-ccs.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.615054005Z level=info msg="Executing migration" id="Drop old table playlist_item table" 08:24:02 mariadb | 2024-04-26 8:21:33 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool 08:24:02 policy-apex-pdp | sasl.login.read.timeout.ms = null 08:24:02 policy-pap | retry.backoff.ms = 100 08:24:02 kafka | group.coordinator.new.enable = false 08:24:02 zookeeper | [2024-04-26 08:21:32,775] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.615119449Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=63.123µs 08:24:02 mariadb | 2024-04-26 8:21:33 0 [Note] Plugin 'FEEDBACK' is disabled. 
08:24:02 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 08:24:02 policy-pap | sasl.client.callback.handler.class = null 08:24:02 kafka | group.coordinator.threads = 1 08:24:02 zookeeper | [2024-04-26 08:21:32,775] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.617189415Z level=info msg="Executing migration" id="create playlist table v2" 08:24:02 mariadb | 2024-04-26 8:21:33 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 08:24:02 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 08:24:02 policy-pap | sasl.jaas.config = null 08:24:02 kafka | group.initial.rebalance.delay.ms = 3000 08:24:02 zookeeper | [2024-04-26 08:21:32,775] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.617877821Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=684.976µs 08:24:02 mariadb | 2024-04-26 8:21:33 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. 
08:24:02 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 08:24:02 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 08:24:02 kafka | group.max.session.timeout.ms = 1800000 08:24:02 zookeeper | [2024-04-26 08:21:32,775] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.621838186Z level=info msg="Executing migration" id="create playlist item table v2" 08:24:02 mariadb | 2024-04-26 8:21:33 0 [Note] Server socket created on IP: '0.0.0.0'. 08:24:02 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 08:24:02 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 08:24:02 kafka | group.max.size = 2147483647 08:24:02 zookeeper | [2024-04-26 08:21:32,776] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.622360322Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=507.155µs 08:24:02 mariadb | 2024-04-26 8:21:33 0 [Note] Server socket created on IP: '::'. 08:24:02 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 08:24:02 policy-pap | sasl.kerberos.service.name = null 08:24:02 kafka | group.min.session.timeout.ms = 6000 08:24:02 zookeeper | [2024-04-26 08:21:32,776] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.62754307Z level=info msg="Executing migration" id="Update playlist table charset" 08:24:02 mariadb | 2024-04-26 8:21:33 0 [Note] mariadbd: ready for connections. 
08:24:02 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 08:24:02 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 08:24:02 kafka | initial.broker.registration.timeout.ms = 60000 08:24:02 zookeeper | [2024-04-26 08:21:32,776] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.627563151Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=20.481µs 08:24:02 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution 08:24:02 policy-apex-pdp | sasl.mechanism = GSSAPI 08:24:02 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 08:24:02 kafka | inter.broker.listener.name = PLAINTEXT 08:24:02 zookeeper | [2024-04-26 08:21:32,776] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.630275831Z level=info msg="Executing migration" id="Update playlist_item table charset" 08:24:02 mariadb | 2024-04-26 8:21:33 0 [Note] InnoDB: Buffer pool(s) load completed at 240426 8:21:33 08:24:02 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 08:24:02 policy-pap | sasl.login.callback.handler.class = null 08:24:02 kafka | inter.broker.protocol.version = 3.6-IV2 08:24:02 zookeeper | [2024-04-26 08:21:32,776] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.630313613Z level=info msg="Migration successfully executed" id="Update 
playlist_item table charset" duration=39.462µs 08:24:02 mariadb | 2024-04-26 8:21:33 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication) 08:24:02 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 08:24:02 policy-pap | sasl.login.class = null 08:24:02 kafka | kafka.metrics.polling.interval.secs = 10 08:24:02 zookeeper | [2024-04-26 08:21:32,776] INFO Server environment:os.memory.free=491MB (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.63451756Z level=info msg="Executing migration" id="Add playlist column created_at" 08:24:02 mariadb | 2024-04-26 8:21:33 49 [Warning] Aborted connection 49 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) 08:24:02 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 08:24:02 policy-pap | sasl.login.connect.timeout.ms = null 08:24:02 kafka | kafka.metrics.reporters = [] 08:24:02 zookeeper | [2024-04-26 08:21:32,776] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.639302347Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.785387ms 08:24:02 mariadb | 2024-04-26 8:21:34 50 [Warning] Aborted connection 50 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication) 08:24:02 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 08:24:02 policy-pap | sasl.login.read.timeout.ms = null 08:24:02 kafka | leader.imbalance.check.interval.seconds = 300 08:24:02 zookeeper | [2024-04-26 08:21:32,776] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 policy-db-migrator 
| 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.642698922Z level=info msg="Executing migration" id="Add playlist column updated_at" 08:24:02 mariadb | 2024-04-26 8:21:35 109 [Warning] Aborted connection 109 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) 08:24:02 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 08:24:02 policy-pap | sasl.login.refresh.buffer.seconds = 300 08:24:02 kafka | leader.imbalance.per.broker.percentage = 10 08:24:02 zookeeper | [2024-04-26 08:21:32,776] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.645846175Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.146603ms 08:24:02 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 08:24:02 policy-pap | sasl.login.refresh.min.period.seconds = 60 08:24:02 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 08:24:02 zookeeper | [2024-04-26 08:21:32,776] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.652271866Z level=info msg="Executing migration" id="drop preferences table v2" 08:24:02 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 08:24:02 policy-pap | sasl.login.refresh.window.factor = 0.8 08:24:02 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 08:24:02 zookeeper | [2024-04-26 08:21:32,776] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, 
parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.653798035Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=1.520798ms 08:24:02 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 08:24:02 policy-pap | sasl.login.refresh.window.jitter = 0.05 08:24:02 kafka | log.cleaner.backoff.ms = 15000 08:24:02 zookeeper | [2024-04-26 08:21:32,776] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.657482425Z level=info msg="Executing migration" id="drop preferences table v3" 08:24:02 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 08:24:02 policy-pap | sasl.login.retry.backoff.max.ms = 10000 08:24:02 kafka | log.cleaner.dedupe.buffer.size = 134217728 08:24:02 zookeeper | [2024-04-26 08:21:32,776] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.657559199Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=77.194µs 08:24:02 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 08:24:02 policy-pap | sasl.login.retry.backoff.ms = 100 08:24:02 kafka | log.cleaner.delete.retention.ms = 86400000 08:24:02 zookeeper | [2024-04-26 08:21:32,776] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.6620305Z level=info msg="Executing migration" id="create preferences table v3" 08:24:02 policy-pap | sasl.mechanism = GSSAPI 08:24:02 kafka | log.cleaner.enable = true 08:24:02 zookeeper | [2024-04-26 08:21:32,776] INFO zookeeper.intBufferStartingSizeBytes = 1024 
(org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 zookeeper | [2024-04-26 08:21:32,777] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) 08:24:02 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.662787269Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=756.719µs 08:24:02 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 08:24:02 kafka | log.cleaner.io.buffer.load.factor = 0.9 08:24:02 policy-apex-pdp | security.protocol = PLAINTEXT 08:24:02 zookeeper | [2024-04-26 08:21:32,778] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.665155362Z level=info msg="Executing migration" id="Update preferences table charset" 08:24:02 policy-pap | sasl.oauthbearer.expected.audience = null 08:24:02 kafka | log.cleaner.io.buffer.size = 524288 08:24:02 policy-apex-pdp | security.providers = null 08:24:02 zookeeper | [2024-04-26 08:21:32,778] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 zookeeper | [2024-04-26 08:21:32,778] INFO getData response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.665175283Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=20.321µs 08:24:02 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 08:24:02 policy-apex-pdp | send.buffer.bytes = 131072 08:24:02 policy-pap | sasl.oauthbearer.expected.issuer = null 08:24:02 zookeeper | [2024-04-26 08:21:32,778] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.668074322Z level=info msg="Executing migration" id="Add column team_id in preferences" 08:24:02 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 08:24:02 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 08:24:02 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 08:24:02 zookeeper | [2024-04-26 08:21:32,779] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.670325968Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=2.251966ms 08:24:02 kafka | log.cleaner.min.cleanable.ratio = 0.5 08:24:02 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 08:24:02 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 08:24:02 zookeeper | [2024-04-26 08:21:32,779] INFO 
zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.672855089Z level=info msg="Executing migration" id="Update team_id column values in preferences" 08:24:02 kafka | log.cleaner.min.compaction.lag.ms = 0 08:24:02 policy-apex-pdp | ssl.cipher.suites = null 08:24:02 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 08:24:02 zookeeper | [2024-04-26 08:21:32,779] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 08:24:02 policy-db-migrator | > upgrade 0450-pdpgroup.sql 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.672964484Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=109.615µs 08:24:02 kafka | log.cleaner.threads = 1 08:24:02 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 08:24:02 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 08:24:02 zookeeper | [2024-04-26 08:21:32,779] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.682327287Z level=info msg="Executing migration" id="Add column week_start in preferences" 08:24:02 kafka | log.cleanup.policy = [delete] 08:24:02 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 08:24:02 policy-pap | sasl.oauthbearer.scope.claim.name = scope 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.686216849Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.893301ms 08:24:02 policy-apex-pdp | 
ssl.engine.factory.class = null 08:24:02 policy-pap | sasl.oauthbearer.sub.claim.name = sub 08:24:02 policy-db-migrator | -------------- 08:24:02 zookeeper | [2024-04-26 08:21:32,779] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 08:24:02 kafka | log.dir = /tmp/kafka-logs 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.69362652Z level=info msg="Executing migration" id="Add column preferences.json_data" 08:24:02 policy-apex-pdp | ssl.key.password = null 08:24:02 policy-pap | sasl.oauthbearer.token.endpoint.url = null 08:24:02 policy-db-migrator | 08:24:02 zookeeper | [2024-04-26 08:21:32,779] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 08:24:02 kafka | log.dirs = /var/lib/kafka/data 08:24:02 policy-pap | security.protocol = PLAINTEXT 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.696916Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.28936ms 08:24:02 policy-db-migrator | 08:24:02 zookeeper | [2024-04-26 08:21:32,782] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 kafka | log.flush.interval.messages = 9223372036854775807 08:24:02 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 08:24:02 policy-pap | security.providers = null 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.701672626Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 08:24:02 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 08:24:02 zookeeper | [2024-04-26 08:21:32,782] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 kafka | log.flush.interval.ms = null 08:24:02 policy-apex-pdp | ssl.keystore.certificate.chain = null 08:24:02 policy-pap | send.buffer.bytes = 131072 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.701871996Z level=info 
msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=199.56µs 08:24:02 policy-db-migrator | -------------- 08:24:02 zookeeper | [2024-04-26 08:21:32,782] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) 08:24:02 kafka | log.flush.offset.checkpoint.interval.ms = 60000 08:24:02 policy-apex-pdp | ssl.keystore.key = null 08:24:02 policy-pap | session.timeout.ms = 45000 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.705941866Z level=info msg="Executing migration" id="Add preferences index org_id" 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) 08:24:02 zookeeper | [2024-04-26 08:21:32,782] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) 08:24:02 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 08:24:02 policy-apex-pdp | ssl.keystore.location = null 08:24:02 policy-pap | socket.connection.setup.timeout.max.ms = 30000 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.706942218Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=997.581µs 08:24:02 policy-db-migrator | -------------- 08:24:02 zookeeper | [2024-04-26 08:21:32,782] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 kafka | 
log.flush.start.offset.checkpoint.interval.ms = 60000 08:24:02 policy-apex-pdp | ssl.keystore.password = null 08:24:02 policy-pap | socket.connection.setup.timeout.ms = 10000 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.714166251Z level=info msg="Executing migration" id="Add preferences index user_id" 08:24:02 policy-db-migrator | 08:24:02 zookeeper | [2024-04-26 08:21:32,804] INFO Logging initialized @544ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) 08:24:02 kafka | log.index.interval.bytes = 4096 08:24:02 policy-apex-pdp | ssl.keystore.type = JKS 08:24:02 policy-pap | ssl.cipher.suites = null 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.714993074Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=830.153µs 08:24:02 policy-db-migrator | 08:24:02 zookeeper | [2024-04-26 08:21:32,924] WARN o.e.j.s.ServletContextHandler@311bf055{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) 08:24:02 kafka | log.index.size.max.bytes = 10485760 08:24:02 policy-apex-pdp | ssl.protocol = TLSv1.3 08:24:02 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.718829091Z level=info msg="Executing migration" id="create alert table v1" 08:24:02 policy-db-migrator | > upgrade 0470-pdp.sql 08:24:02 zookeeper | [2024-04-26 08:21:32,924] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) 08:24:02 kafka | log.local.retention.bytes = -2 08:24:02 policy-apex-pdp | ssl.provider = null 08:24:02 policy-pap | ssl.endpoint.identification.algorithm = https 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.719645373Z level=info msg="Migration successfully executed" id="create alert table v1" duration=815.722µs 08:24:02 policy-db-migrator | -------------- 08:24:02 zookeeper | [2024-04-26 08:21:32,941] INFO jetty-9.4.54.v20240208; built: 2024-02-08T19:42:39.027Z; git: 
cef3fbd6d736a21e7d541a5db490381d95a2047d; jvm 11.0.22+7-LTS (org.eclipse.jetty.server.Server) 08:24:02 kafka | log.local.retention.ms = -2 08:24:02 policy-apex-pdp | ssl.secure.random.implementation = null 08:24:02 policy-pap | ssl.engine.factory.class = null 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.724292663Z level=info msg="Executing migration" id="add index alert org_id & id " 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 08:24:02 zookeeper | [2024-04-26 08:21:32,967] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) 08:24:02 kafka | log.message.downconversion.enable = true 08:24:02 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 08:24:02 policy-pap | ssl.key.password = null 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.725156358Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=863.625µs 08:24:02 policy-db-migrator | -------------- 08:24:02 zookeeper | [2024-04-26 08:21:32,968] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) 08:24:02 kafka | log.message.format.version = 3.0-IV1 08:24:02 policy-apex-pdp | ssl.truststore.certificates = null 08:24:02 policy-pap | ssl.keymanager.algorithm = SunX509 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.728702851Z level=info msg="Executing migration" id="add index alert state" 08:24:02 policy-db-migrator | 08:24:02 zookeeper | [2024-04-26 08:21:32,969] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session) 08:24:02 kafka | log.message.timestamp.after.max.ms = 9223372036854775807 08:24:02 policy-apex-pdp | ssl.truststore.location = null 
08:24:02 policy-pap | ssl.keystore.certificate.chain = null 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.729801868Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.100298ms 08:24:02 policy-db-migrator | 08:24:02 zookeeper | [2024-04-26 08:21:32,971] WARN ServletContext@o.e.j.s.ServletContextHandler@311bf055{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) 08:24:02 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 08:24:02 policy-apex-pdp | ssl.truststore.password = null 08:24:02 policy-pap | ssl.keystore.key = null 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.734476309Z level=info msg="Executing migration" id="add index alert dashboard_id" 08:24:02 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 08:24:02 zookeeper | [2024-04-26 08:21:32,978] INFO Started o.e.j.s.ServletContextHandler@311bf055{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) 08:24:02 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 08:24:02 policy-pap | ssl.keystore.location = null 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.735812077Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.335728ms 08:24:02 policy-db-migrator | -------------- 08:24:02 zookeeper | [2024-04-26 08:21:32,990] INFO Started ServerConnector@6f53b8a{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) 08:24:02 kafka | log.message.timestamp.type = CreateTime 08:24:02 policy-apex-pdp | ssl.truststore.type = JKS 08:24:02 policy-pap | ssl.keystore.password = null 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.742648871Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT 
NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) 08:24:02 zookeeper | [2024-04-26 08:21:32,991] INFO Started @731ms (org.eclipse.jetty.server.Server) 08:24:02 kafka | log.preallocate = false 08:24:02 policy-apex-pdp | transaction.timeout.ms = 60000 08:24:02 policy-pap | ssl.keystore.type = JKS 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.743314435Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=664.705µs 08:24:02 policy-db-migrator | -------------- 08:24:02 zookeeper | [2024-04-26 08:21:32,991] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) 08:24:02 kafka | log.retention.bytes = -1 08:24:02 policy-apex-pdp | transactional.id = null 08:24:02 policy-pap | ssl.protocol = TLSv1.3 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.746257447Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 08:24:02 policy-db-migrator | 08:24:02 zookeeper | [2024-04-26 08:21:32,994] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) 08:24:02 kafka | log.retention.check.interval.ms = 300000 08:24:02 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.747133593Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=875.705µs 08:24:02 policy-db-migrator | 08:24:02 zookeeper | [2024-04-26 08:21:32,995] WARN maxCnxns is not configured, using 
default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) 08:24:02 kafka | log.retention.hours = 168 08:24:02 policy-apex-pdp | 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.663+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.750470064Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 08:24:02 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 08:24:02 zookeeper | [2024-04-26 08:21:32,996] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) 08:24:02 kafka | log.retention.minutes = null 08:24:02 policy-pap | ssl.provider = null 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.678+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.751686897Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.216713ms 08:24:02 policy-db-migrator | -------------- 08:24:02 zookeeper | [2024-04-26 08:21:32,997] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) 08:24:02 kafka | log.retention.ms = null 08:24:02 policy-pap | ssl.secure.random.implementation = null 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.678+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.757422873Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 08:24:02 zookeeper | [2024-04-26 08:21:33,012] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 08:24:02 kafka | log.roll.hours = 168 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp 
(pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) 08:24:02 policy-pap | ssl.trustmanager.algorithm = PKIX 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.678+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714119724678 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.767170016Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=9.747003ms 08:24:02 zookeeper | [2024-04-26 08:21:33,012] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 08:24:02 kafka | log.roll.jitter.hours = 0 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-pap | ssl.truststore.certificates = null 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.679+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=41465092-4801-404b-834e-cb5739a089eb, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.770172362Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 08:24:02 zookeeper | [2024-04-26 08:21:33,013] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) 08:24:02 kafka | log.roll.jitter.ms = null 08:24:02 policy-db-migrator | 08:24:02 policy-pap | ssl.truststore.location = null 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.679+00:00|INFO|ServiceManager|main] service manager starting set alive 08:24:02 grafana | 
logger=migrator t=2024-04-26T08:21:34.770935621Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=762.239µs 08:24:02 zookeeper | [2024-04-26 08:21:33,013] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) 08:24:02 kafka | log.roll.ms = null 08:24:02 policy-db-migrator | 08:24:02 policy-pap | ssl.truststore.password = null 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.679+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.776225034Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 08:24:02 zookeeper | [2024-04-26 08:21:33,018] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) 08:24:02 kafka | log.segment.bytes = 1073741824 08:24:02 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 08:24:02 policy-pap | ssl.truststore.type = JKS 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.680+00:00|INFO|ServiceManager|main] service manager starting topic sinks 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.777116569Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=891.446µs 08:24:02 zookeeper | [2024-04-26 08:21:33,018] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 08:24:02 kafka | log.segment.delete.delay.ms = 60000 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.680+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.78641605Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 08:24:02 zookeeper | [2024-04-26 
08:21:33,021] INFO Snapshot loaded in 7 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) 08:24:02 kafka | max.connection.creation.rate = 2147483647 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.681+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener 08:24:02 zookeeper | [2024-04-26 08:21:33,021] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 08:24:02 kafka | max.connections = 2147483647 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.787051323Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=633.773µs 08:24:02 zookeeper | [2024-04-26 08:21:33,022] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.791477051Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.681+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher 08:24:02 kafka | max.connections.per.ip = 2147483647 08:24:02 zookeeper | [2024-04-26 08:21:33,032] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.792207259Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=732.017µs 08:24:02 policy-apex-pdp | 
[2024-04-26T08:22:04.681+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher 08:24:02 kafka | max.connections.per.ip.overrides = 08:24:02 policy-db-migrator | 08:24:02 zookeeper | [2024-04-26 08:21:33,033] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.682+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=385d2de3-e329-4c2e-8254-58c110e4f277, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@60a2630a 08:24:02 kafka | max.incremental.fetch.session.cache.slots = 1000 08:24:02 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.795235645Z level=info msg="Executing migration" id="create alert_notification table v1" 08:24:02 zookeeper | [2024-04-26 08:21:33,047] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) 08:24:02 kafka | message.max.bytes = 1048588 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.796269328Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.030154ms 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.682+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource 
[getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=385d2de3-e329-4c2e-8254-58c110e4f277, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted 08:24:02 zookeeper | [2024-04-26 08:21:33,048] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.801420104Z level=info msg="Executing migration" id="Add column is_default" 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.682+00:00|INFO|ServiceManager|main] service manager starting Create REST server 08:24:02 kafka | metadata.log.dir = null 08:24:02 zookeeper | [2024-04-26 08:21:34,358] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.805063742Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.643218ms 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.694+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: 08:24:02 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | 
logger=migrator t=2024-04-26T08:21:34.863234534Z level=info msg="Executing migration" id="Add column frequency" 08:24:02 policy-apex-pdp | [] 08:24:02 kafka | metadata.log.max.snapshot.interval.ms = 3600000 08:24:02 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.867487534Z level=info msg="Migration successfully executed" id="Add column frequency" duration=4.262541ms 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.696+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 08:24:02 kafka | metadata.log.segment.bytes = 1073741824 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.87575115Z level=info msg="Executing migration" id="Add column send_reminder" 08:24:02 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"b46d631f-bb6e-4436-9510-4ccf91eae87a","timestampMs":1714119724681,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup"} 08:24:02 kafka | metadata.log.segment.min.bytes = 8388608 08:24:02 policy-pap | 08:24:02 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.878624658Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=2.873298ms 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.837+00:00|INFO|ServiceManager|main] service manager starting Rest Server 08:24:02 kafka | metadata.log.segment.ms = 604800000 08:24:02 policy-pap | [2024-04-26T08:22:01.122+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.881070995Z level=info msg="Executing migration" id="Add column disable_resolve_message" 08:24:02 policy-apex-pdp | 
[2024-04-26T08:22:04.838+00:00|INFO|ServiceManager|main] service manager starting 08:24:02 kafka | metadata.max.idle.interval.ms = 500 08:24:02 policy-pap | [2024-04-26T08:22:01.122+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.886126916Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=5.054391ms 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.838+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters 08:24:02 kafka | metadata.max.retention.bytes = 104857600 08:24:02 policy-pap | [2024-04-26T08:22:01.122+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714119721120 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.889435297Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.838+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, 
(http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 08:24:02 kafka | metadata.max.retention.ms = 604800000 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.890186186Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=751.499µs 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.849+00:00|INFO|ServiceManager|main] service manager started 08:24:02 kafka | metric.reporters = [] 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.894905959Z level=info msg="Executing migration" id="Update alert table charset" 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.849+00:00|INFO|ServiceManager|main] service manager started 08:24:02 kafka | metrics.num.samples = 2 08:24:02 policy-pap | [2024-04-26T08:22:01.131+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-1, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Subscribed to topic(s): policy-pdp-pap 08:24:02 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.894938241Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=32.432µs 08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.849+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 
08:24:02 kafka | metrics.recording.level = INFO
08:24:02 policy-pap | [2024-04-26T08:22:01.132+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
08:24:02 policy-db-migrator | --------------
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.899193431Z level=info msg="Executing migration" id="Update alert_notification table charset"
08:24:02 kafka | metrics.sample.window.ms = 30000
08:24:02 policy-pap | allow.auto.create.topics = true
08:24:02 policy-apex-pdp | [2024-04-26T08:22:04.849+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@72c927f1{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@1ac85b0c{/,null,STOPPED}, connector=RestServerParameters@63c5efee{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-72b16078==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@aa16c20f{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-d78795==org.glassfish.jersey.servlet.ServletContainer@b1764d3c{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.899221812Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=29.311µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.901428096Z level=info msg="Executing migration" id="create notification_journal table v1"
08:24:02 policy-apex-pdp | [2024-04-26T08:22:05.013+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2, groupId=385d2de3-e329-4c2e-8254-58c110e4f277] Cluster ID: qUquThiHQAKlsircSK68zw
08:24:02 policy-db-migrator | --------------
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.902158603Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=730.027µs
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.905657294Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
08:24:02 policy-apex-pdp | [2024-04-26T08:22:05.013+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: qUquThiHQAKlsircSK68zw
08:24:02 policy-db-migrator |
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.906659665Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.004561ms
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.914011755Z level=info msg="Executing migration" id="drop alert_notification_journal"
08:24:02 policy-pap | auto.commit.interval.ms = 5000
08:24:02 policy-apex-pdp | [2024-04-26T08:22:05.014+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0
08:24:02 policy-db-migrator |
08:24:02 kafka | min.insync.replicas = 1
08:24:02 policy-pap | auto.include.jmx.reporter = true
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.914608116Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=596.141µs
08:24:02 policy-apex-pdp | [2024-04-26T08:22:05.014+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2, groupId=385d2de3-e329-4c2e-8254-58c110e4f277] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null)
08:24:02 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql
08:24:02 kafka | node.id = 1
08:24:02 policy-pap | auto.offset.reset = latest
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.921497752Z level=info msg="Executing migration" id="create alert_notification_state table v1"
08:24:02 policy-apex-pdp | [2024-04-26T08:22:05.021+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2, groupId=385d2de3-e329-4c2e-8254-58c110e4f277] (Re-)joining group
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | num.io.threads = 8
08:24:02 policy-pap | bootstrap.servers = [kafka:9092]
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.922925376Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.433643ms
08:24:02 policy-apex-pdp | [2024-04-26T08:22:05.035+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2, groupId=385d2de3-e329-4c2e-8254-58c110e4f277] Request joining group due to: need to re-join with the given member-id: consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2-623aa870-0e4f-4435-b6a7-fae0c0299f99
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version))
08:24:02 kafka | num.network.threads = 3
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.925851826Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
08:24:02 policy-apex-pdp | [2024-04-26T08:22:05.036+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2, groupId=385d2de3-e329-4c2e-8254-58c110e4f277] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | num.partitions = 1
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.926835907Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=984.131µs
08:24:02 policy-apex-pdp | [2024-04-26T08:22:05.036+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2, groupId=385d2de3-e329-4c2e-8254-58c110e4f277] (Re-)joining group
08:24:02 policy-db-migrator |
08:24:02 kafka | num.recovery.threads.per.data.dir = 1
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.929901706Z level=info msg="Executing migration" id="Add for to alert table"
08:24:02 policy-apex-pdp | [2024-04-26T08:22:05.412+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls
08:24:02 policy-db-migrator |
08:24:02 kafka | num.replica.alter.log.dirs.threads = null
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.933988247Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.086521ms
08:24:02 policy-apex-pdp | [2024-04-26T08:22:05.414+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls
08:24:02 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql
08:24:02 kafka | num.replica.fetchers = 1
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.939450538Z level=info msg="Executing migration" id="Add column uid in alert_notification"
08:24:02 policy-apex-pdp | [2024-04-26T08:22:08.040+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2, groupId=385d2de3-e329-4c2e-8254-58c110e4f277] Successfully joined group with generation Generation{generationId=1, memberId='consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2-623aa870-0e4f-4435-b6a7-fae0c0299f99', protocol='range'}
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | offset.metadata.max.bytes = 4096
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.943242244Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.791886ms
08:24:02 policy-apex-pdp | [2024-04-26T08:22:08.049+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2, groupId=385d2de3-e329-4c2e-8254-58c110e4f277] Finished assignment for group at generation 1: {consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2-623aa870-0e4f-4435-b6a7-fae0c0299f99=Assignment(partitions=[policy-pdp-pap-0])}
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version))
08:24:02 kafka | offsets.commit.required.acks = -1
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.946077531Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
08:24:02 policy-apex-pdp | [2024-04-26T08:22:08.057+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2, groupId=385d2de3-e329-4c2e-8254-58c110e4f277] Successfully synced group in generation Generation{generationId=1, memberId='consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2-623aa870-0e4f-4435-b6a7-fae0c0299f99', protocol='range'}
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | offsets.commit.timeout.ms = 5000
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.946303802Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=226.102µs
08:24:02 policy-apex-pdp | [2024-04-26T08:22:08.057+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2, groupId=385d2de3-e329-4c2e-8254-58c110e4f277] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0])
08:24:02 policy-db-migrator |
08:24:02 kafka | offsets.load.buffer.size = 5242880
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.949107497Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
08:24:02 policy-apex-pdp | [2024-04-26T08:22:08.058+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2, groupId=385d2de3-e329-4c2e-8254-58c110e4f277] Adding newly assigned partitions: policy-pdp-pap-0
08:24:02 policy-db-migrator |
08:24:02 kafka | offsets.retention.check.interval.ms = 600000
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.950758521Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.651285ms
08:24:02 policy-apex-pdp | [2024-04-26T08:22:08.066+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2, groupId=385d2de3-e329-4c2e-8254-58c110e4f277] Found no committed offset for partition policy-pdp-pap-0
08:24:02 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql
08:24:02 kafka | offsets.retention.minutes = 10080
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.957017315Z level=info msg="Executing migration" id="Remove unique index org_id_name"
08:24:02 policy-apex-pdp | [2024-04-26T08:22:08.077+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2, groupId=385d2de3-e329-4c2e-8254-58c110e4f277] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}.
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | offsets.topic.compression.codec = 0
08:24:02 policy-pap | check.crcs = true
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.958448488Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.435924ms
08:24:02 policy-apex-pdp | [2024-04-26T08:22:24.682+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
08:24:02 kafka | offsets.topic.num.partitions = 50
08:24:02 policy-pap | client.dns.lookup = use_all_dns_ips
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.965503573Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
08:24:02 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"7c7dae4d-eb28-477f-a313-371e5e410caf","timestampMs":1714119744682,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup"}
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | offsets.topic.replication.factor = 1
08:24:02 policy-pap | client.id = consumer-policy-pap-2
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.969338391Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.835208ms
08:24:02 policy-apex-pdp | [2024-04-26T08:22:24.706+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
08:24:02 policy-db-migrator |
08:24:02 kafka | offsets.topic.segment.bytes = 104857600
08:24:02 policy-pap | client.rack =
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.972322365Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
08:24:02 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"7c7dae4d-eb28-477f-a313-371e5e410caf","timestampMs":1714119744682,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup"}
08:24:02 policy-db-migrator |
08:24:02 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
08:24:02 policy-pap | connections.max.idle.ms = 540000
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.972418179Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=91.955µs
08:24:02 policy-apex-pdp | [2024-04-26T08:22:24.708+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
08:24:02 policy-db-migrator | > upgrade 0570-toscadatatype.sql
08:24:02 kafka | password.encoder.iterations = 4096
08:24:02 policy-pap | default.api.timeout.ms = 60000
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.979554509Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
08:24:02 policy-apex-pdp | [2024-04-26T08:22:24.838+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | password.encoder.key.length = 128
08:24:02 policy-pap | enable.auto.commit = true
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.980900058Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.35186ms
08:24:02 policy-apex-pdp | {"source":"pap-b4f6f8e5-f898-4e69-90e7-669877e7a07f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c089a3a3-4fc1-43c0-a7be-21299199c004","timestampMs":1714119744793,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version))
08:24:02 kafka | password.encoder.keyfactory.algorithm = null
08:24:02 policy-pap | exclude.internal.topics = true
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.985050611Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
08:24:02 policy-apex-pdp | [2024-04-26T08:22:24.850+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | password.encoder.old.secret = null
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.986056254Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.005083ms
08:24:02 policy-apex-pdp | [2024-04-26T08:22:24.850+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap]
08:24:02 policy-db-migrator |
08:24:02 kafka | password.encoder.secret = null
08:24:02 policy-pap | fetch.max.bytes = 52428800
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.99063996Z level=info msg="Executing migration" id="Drop old annotation table v4"
08:24:02 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"46151d45-9ff7-40dd-999c-96d4f36448f0","timestampMs":1714119744849,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup"}
08:24:02 policy-db-migrator |
08:24:02 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
08:24:02 policy-pap | fetch.max.wait.ms = 500
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.990735956Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=97.036µs
08:24:02 policy-apex-pdp | [2024-04-26T08:22:24.850+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
08:24:02 policy-db-migrator | > upgrade 0580-toscadatatypes.sql
08:24:02 kafka | process.roles = []
08:24:02 policy-pap | fetch.min.bytes = 1
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.995162844Z level=info msg="Executing migration" id="create annotation table v5"
08:24:02 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c089a3a3-4fc1-43c0-a7be-21299199c004","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"9e21325e-0675-4fb1-917c-73db7541fd22","timestampMs":1714119744850,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | producer.id.expiration.check.interval.ms = 600000
08:24:02 policy-pap | group.id = policy-pap
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:34.996668371Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.504007ms
08:24:02 policy-apex-pdp | [2024-04-26T08:22:24.867+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version))
08:24:02 kafka | producer.id.expiration.ms = 86400000
08:24:02 policy-pap | group.instance.id = null
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.00536081Z level=info msg="Executing migration" id="add index annotation 0 v3"
08:24:02 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c089a3a3-4fc1-43c0-a7be-21299199c004","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"9e21325e-0675-4fb1-917c-73db7541fd22","timestampMs":1714119744850,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | producer.purgatory.purge.interval.requests = 1000
08:24:02 policy-pap | heartbeat.interval.ms = 3000
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.007069908Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.711749ms
08:24:02 policy-apex-pdp | [2024-04-26T08:22:24.867+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
08:24:02 policy-db-migrator |
08:24:02 kafka | queued.max.request.bytes = -1
08:24:02 policy-pap | interceptor.classes = []
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.012465837Z level=info msg="Executing migration" id="add index annotation 1 v3"
08:24:02 policy-db-migrator |
08:24:02 policy-apex-pdp | [2024-04-26T08:22:24.873+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
08:24:02 kafka | queued.max.requests = 500
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.013910642Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.445235ms
08:24:02 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql
08:24:02 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"46151d45-9ff7-40dd-999c-96d4f36448f0","timestampMs":1714119744849,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup"}
08:24:02 kafka | quota.window.num = 11
08:24:02 policy-pap | internal.leave.group.on.close = true
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.017619143Z level=info msg="Executing migration" id="add index annotation 2 v3"
08:24:02 policy-apex-pdp | [2024-04-26T08:22:24.874+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
08:24:02 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.018512199Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=889.345µs
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | quota.window.size.seconds = 1
08:24:02 policy-apex-pdp | [2024-04-26T08:22:24.894+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
08:24:02 policy-pap | isolation.level = read_uncommitted
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.023990432Z level=info msg="Executing migration" id="add index annotation 3 v3"
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
08:24:02 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824
08:24:02 policy-apex-pdp | {"source":"pap-b4f6f8e5-f898-4e69-90e7-669877e7a07f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"a5f1d0a2-79e5-4903-b04d-2fc825203dbc","timestampMs":1714119744793,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
08:24:02 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.025609115Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.618364ms
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | remote.log.manager.task.interval.ms = 30000
08:24:02 policy-apex-pdp | [2024-04-26T08:22:24.897+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
08:24:02 policy-pap | max.partition.fetch.bytes = 1048576
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.028959118Z level=info msg="Executing migration" id="add index annotation 4 v3"
08:24:02 policy-db-migrator |
08:24:02 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
08:24:02 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"a5f1d0a2-79e5-4903-b04d-2fc825203dbc","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"a7018436-53c5-4d20-9150-d26e4bf63ebb","timestampMs":1714119744897,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
08:24:02 kafka | remote.log.manager.task.retry.backoff.ms = 500
08:24:02 policy-apex-pdp | [2024-04-26T08:22:24.905+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.030541599Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.581831ms
08:24:02 policy-db-migrator |
08:24:02 policy-pap | max.poll.interval.ms = 300000
08:24:02 kafka | remote.log.manager.task.retry.jitter = 0.2
08:24:02 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"a5f1d0a2-79e5-4903-b04d-2fc825203dbc","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"a7018436-53c5-4d20-9150-d26e4bf63ebb","timestampMs":1714119744897,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.038835478Z level=info msg="Executing migration" id="Update annotation table charset"
08:24:02 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql
08:24:02 policy-pap | max.poll.records = 500
08:24:02 kafka | remote.log.manager.thread.pool.size = 10
08:24:02 policy-apex-pdp | [2024-04-26T08:22:24.905+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.038869859Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=38.792µs
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | metadata.max.age.ms = 300000
08:24:02 kafka | remote.log.metadata.custom.metadata.max.bytes = 128
08:24:02 policy-apex-pdp | [2024-04-26T08:22:24.943+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.046983968Z level=info msg="Executing migration" id="Add column region_id to annotation table"
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version))
08:24:02 policy-pap | metric.reporters = []
08:24:02 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
08:24:02 policy-apex-pdp | {"source":"pap-b4f6f8e5-f898-4e69-90e7-669877e7a07f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"4a11c98e-6791-453e-9808-0827aeaec0c3","timestampMs":1714119744908,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.052785648Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=5.793519ms
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | metrics.num.samples = 2
08:24:02 kafka | remote.log.metadata.manager.class.path = null
08:24:02 policy-apex-pdp | [2024-04-26T08:22:24.945+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.057609497Z level=info msg="Executing migration" id="Drop category_id index"
08:24:02 policy-db-migrator |
08:24:02 policy-pap | metrics.recording.level = INFO
08:24:02 kafka | remote.log.metadata.manager.impl.prefix = rlmm.config.
08:24:02 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"4a11c98e-6791-453e-9808-0827aeaec0c3","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"e3874a72-08db-45fe-aceb-34c903ea4e7e","timestampMs":1714119744944,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.05864157Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.033713ms
08:24:02 policy-db-migrator |
08:24:02 policy-pap | metrics.sample.window.ms = 30000
08:24:02 kafka | remote.log.metadata.manager.listener.name = null
08:24:02 policy-apex-pdp | [2024-04-26T08:22:24.953+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.062574833Z level=info msg="Executing migration" id="Add column tags to annotation table"
08:24:02 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
08:24:02 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
08:24:02 kafka | remote.log.reader.max.pending.tasks = 100
08:24:02 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"4a11c98e-6791-453e-9808-0827aeaec0c3","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"e3874a72-08db-45fe-aceb-34c903ea4e7e","timestampMs":1714119744944,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.067986422Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=5.409119ms
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | receive.buffer.bytes = 65536
08:24:02 kafka | remote.log.reader.threads = 10
08:24:02 policy-apex-pdp | [2024-04-26T08:22:24.954+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.071542646Z level=info msg="Executing migration" id="Create annotation_tag table v2"
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version))
08:24:02 kafka | remote.log.storage.manager.class.name = null
08:24:02 policy-apex-pdp | [2024-04-26T08:22:56.149+00:00|INFO|RequestLog|qtp739264372-33] 172.17.0.3 - policyadmin [26/Apr/2024:08:22:56 +0000] "GET /metrics HTTP/1.1" 200 10650 "-" "Prometheus/2.51.2"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.07219985Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=656.064µs
08:24:02 kafka | remote.log.storage.manager.class.path = null
08:24:02 policy-apex-pdp | [2024-04-26T08:23:56.075+00:00|INFO|RequestLog|qtp739264372-28] 172.17.0.3 - policyadmin [26/Apr/2024:08:23:56 +0000] "GET /metrics HTTP/1.1" 200 10649 "-" "Prometheus/2.51.2"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.076563225Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | reconnect.backoff.max.ms = 1000
08:24:02 kafka | remote.log.storage.manager.impl.prefix = rsm.config.
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.077558917Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=994.202µs
08:24:02 policy-db-migrator |
08:24:02 policy-pap | reconnect.backoff.ms = 50
08:24:02 kafka | remote.log.storage.system.enable = false
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.081027796Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
08:24:02 policy-db-migrator |
08:24:02 policy-pap | request.timeout.ms = 30000
08:24:02 kafka | replica.fetch.backoff.ms = 1000
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.082194395Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.170309ms
08:24:02 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
08:24:02 policy-pap | retry.backoff.ms = 100
08:24:02 kafka | replica.fetch.max.bytes = 1048576
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | sasl.client.callback.handler.class = null
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.091663205Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.103063323Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=11.381177ms
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
08:24:02 policy-pap | sasl.jaas.config = null
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.108012948Z level=info msg="Executing migration" id="Create annotation_tag table v3"
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.109082814Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=1.073746ms
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | replica.fetch.min.bytes = 1
08:24:02 policy-db-migrator |
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.114201838Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
08:24:02 kafka | replica.fetch.response.max.bytes = 10485760
08:24:02 policy-db-migrator |
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.115200979Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=998.981µs
08:24:02 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
08:24:02 kafka | replica.fetch.wait.max.ms = 500
08:24:02 policy-db-migrator | > upgrade 0630-toscanodetype.sql
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.121451632Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
08:24:02 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
08:24:02 kafka | replica.high.watermark.checkpoint.interval.ms = 5000
08:24:02 policy-db-migrator | --------------
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.121833742Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=387.121µs
08:24:02 policy-pap | sasl.kerberos.service.name = null
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
08:24:02 kafka | replica.lag.time.max.ms = 30000
08:24:02 kafka | replica.selector.class = null
08:24:02 kafka | replica.socket.receive.buffer.bytes = 65536
08:24:02 kafka | replica.socket.timeout.ms = 30000
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
08:24:02 kafka | replication.quota.window.num = 11
08:24:02 kafka | replication.quota.window.size.seconds = 1
08:24:02 policy-db-migrator |
08:24:02 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
08:24:02 kafka | request.timeout.ms = 30000
08:24:02 kafka | reserved.broker.max.id = 1000
08:24:02 policy-db-migrator |
08:24:02 kafka | sasl.client.callback.handler.class = null
08:24:02 kafka | sasl.enabled.mechanisms = [GSSAPI]
08:24:02 policy-db-migrator | > upgrade 0640-toscanodetypes.sql
08:24:02 policy-pap | sasl.login.callback.handler.class = null
08:24:02 kafka | sasl.jaas.config = null
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.126134854Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | sasl.login.class = null
08:24:02 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
08:24:02 kafka | sasl.kerberos.min.time.before.relogin = 60000
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
08:24:02 policy-pap | sasl.login.connect.timeout.ms = null
08:24:02 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
08:24:02 kafka | sasl.kerberos.service.name = null
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | sasl.kerberos.ticket.renew.jitter = 0.05
08:24:02 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8
08:24:02 policy-db-migrator |
08:24:02 kafka | sasl.login.callback.handler.class = null
08:24:02 kafka | sasl.login.class = null
08:24:02 policy-db-migrator |
08:24:02 kafka | sasl.login.connect.timeout.ms = null
08:24:02 kafka | sasl.login.read.timeout.ms = null
08:24:02 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
08:24:02 kafka | sasl.login.refresh.buffer.seconds = 300
08:24:02 kafka | sasl.login.refresh.min.period.seconds = 60
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | sasl.login.refresh.window.factor = 0.8
08:24:02 kafka | sasl.login.refresh.window.jitter = 0.05
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
08:24:02 policy-pap | sasl.login.read.timeout.ms = null
08:24:02 kafka | sasl.login.retry.backoff.max.ms = 10000
08:24:02 kafka | sasl.login.retry.backoff.ms = 100
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | sasl.login.refresh.buffer.seconds = 300
08:24:02 kafka | sasl.mechanism.controller.protocol = GSSAPI
08:24:02 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
08:24:02 policy-db-migrator |
08:24:02 policy-pap | sasl.login.refresh.min.period.seconds = 60
08:24:02 kafka | sasl.oauthbearer.clock.skew.seconds = 30
08:24:02 kafka | sasl.oauthbearer.expected.audience = null
08:24:02 policy-db-migrator |
08:24:02 kafka | sasl.oauthbearer.expected.issuer = null
08:24:02 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
08:24:02 policy-db-migrator | > upgrade 0660-toscaparameter.sql
08:24:02 policy-pap | sasl.login.refresh.window.factor = 0.8
08:24:02 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
08:24:02 kafka |
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-pap | sasl.login.refresh.window.jitter = 0.05 08:24:02 kafka | sasl.oauthbearer.jwks.endpoint.url = null 08:24:02 kafka | sasl.oauthbearer.scope.claim.name = scope 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 08:24:02 kafka | sasl.oauthbearer.sub.claim.name = sub 08:24:02 kafka | sasl.oauthbearer.token.endpoint.url = null 08:24:02 policy-db-migrator | -------------- 08:24:02 kafka | sasl.server.callback.handler.class = null 08:24:02 kafka | sasl.server.max.receive.size = 524288 08:24:02 kafka | security.inter.broker.protocol = PLAINTEXT 08:24:02 kafka | security.providers = null 08:24:02 policy-db-migrator | 08:24:02 kafka | server.max.startup.time.ms = 9223372036854775807 08:24:02 kafka | socket.connection.setup.timeout.max.ms = 30000 08:24:02 policy-db-migrator | 08:24:02 policy-pap | sasl.login.retry.backoff.max.ms = 10000 08:24:02 kafka | socket.connection.setup.timeout.ms = 10000 08:24:02 kafka | socket.listen.backlog.size = 50 08:24:02 policy-db-migrator | > upgrade 0670-toscapolicies.sql 08:24:02 policy-pap | sasl.login.retry.backoff.ms = 100 08:24:02 kafka | socket.receive.buffer.bytes = 102400 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.126852471Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=717.717µs 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.130638176Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 08:24:02 grafana | logger=migrator 
t=2024-04-26T08:21:35.131071308Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=433.903µs 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) 08:24:02 kafka | socket.request.max.bytes = 104857600 08:24:02 policy-pap | sasl.mechanism = GSSAPI 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.136097188Z level=info msg="Executing migration" id="Add created time to annotation table" 08:24:02 policy-db-migrator | -------------- 08:24:02 kafka | socket.send.buffer.bytes = 102400 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.143627486Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=7.525549ms 08:24:02 policy-db-migrator | 08:24:02 kafka | ssl.cipher.suites = [] 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.146743377Z level=info msg="Executing migration" id="Add updated time to annotation table" 08:24:02 policy-db-migrator | 08:24:02 kafka | ssl.client.auth = none 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.149883629Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.139002ms 08:24:02 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 08:24:02 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.152638452Z level=info msg="Executing migration" id="Add index for created in annotation table" 08:24:02 policy-db-migrator | -------------- 08:24:02 kafka | ssl.endpoint.identification.algorithm = https 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.153613212Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=974.669µs 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy 
(conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 08:24:02 kafka | ssl.engine.factory.class = null 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.158520625Z level=info msg="Executing migration" id="Add index for updated in annotation table" 08:24:02 policy-db-migrator | -------------- 08:24:02 kafka | ssl.key.password = null 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.159596001Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.075106ms 08:24:02 policy-db-migrator | 08:24:02 kafka | ssl.keymanager.algorithm = SunX509 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.16288736Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 08:24:02 policy-db-migrator | 08:24:02 kafka | ssl.keystore.certificate.chain = null 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.163192926Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=305.826µs 08:24:02 policy-db-migrator | > upgrade 0690-toscapolicy.sql 08:24:02 kafka | ssl.keystore.key = null 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.168667839Z level=info msg="Executing migration" id="Add epoch_end column" 08:24:02 policy-db-migrator | -------------- 08:24:02 kafka | ssl.keystore.location = null 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.172704167Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.036398ms 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name 
VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) 08:24:02 kafka | ssl.keystore.password = null 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.175840939Z level=info msg="Executing migration" id="Add index for epoch_end" 08:24:02 policy-db-migrator | -------------- 08:24:02 kafka | ssl.keystore.type = JKS 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.176646731Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=805.971µs 08:24:02 policy-db-migrator | 08:24:02 kafka | ssl.principal.mapping.rules = DEFAULT 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.182118833Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 08:24:02 policy-db-migrator | 08:24:02 kafka | ssl.protocol = TLSv1.3 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.182337474Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=218.671µs 08:24:02 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 08:24:02 kafka | ssl.provider = null 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.186092469Z level=info msg="Executing migration" id="Move region to single row" 08:24:02 policy-db-migrator | -------------- 08:24:02 kafka | ssl.secure.random.implementation = null 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.186492339Z level=info msg="Migration successfully executed" id="Move region to single row" duration=399.791µs 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) 08:24:02 kafka | ssl.trustmanager.algorithm = PKIX 08:24:02 grafana | 
logger=migrator t=2024-04-26T08:21:35.193008935Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 08:24:02 policy-db-migrator | -------------- 08:24:02 kafka | ssl.truststore.certificates = null 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.193764525Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=756.219µs 08:24:02 policy-db-migrator | 08:24:02 kafka | ssl.truststore.location = null 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.198621604Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 08:24:02 policy-db-migrator | 08:24:02 kafka | ssl.truststore.password = null 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.199331902Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=709.678µs 08:24:02 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 08:24:02 kafka | ssl.truststore.type = JKS 08:24:02 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.20279141Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 08:24:02 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.203770811Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=978.942µs 08:24:02 kafka | transaction.max.timeout.ms = 900000 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.207902924Z level=info msg="Executing 
migration" id="Add index for org_id_epoch_end_epoch on annotation table" 08:24:02 kafka | transaction.partition.verification.enable = true 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.20880512Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=901.546µs 08:24:02 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.212969605Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 08:24:02 kafka | transaction.state.log.load.buffer.size = 5242880 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.214351157Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.381222ms 08:24:02 kafka | transaction.state.log.min.isr = 2 08:24:02 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.21790351Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 08:24:02 kafka | transaction.state.log.num.partitions = 50 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.219004507Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.104997ms 08:24:02 kafka | transaction.state.log.replication.factor = 3 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, 
concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.222119348Z level=info msg="Executing migration" id="Increase tags column to length 4096" 08:24:02 kafka | transaction.state.log.segment.bytes = 104857600 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.222186321Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=67.544µs 08:24:02 kafka | transactional.id.expiration.ms = 604800000 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.228337759Z level=info msg="Executing migration" id="create test_data table" 08:24:02 kafka | unclean.leader.election.enable = false 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.230213826Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.875668ms 08:24:02 kafka | unstable.api.versions.enable = false 08:24:02 policy-db-migrator | > upgrade 0730-toscaproperty.sql 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.236959463Z level=info msg="Executing migration" id="create dashboard_version table v1" 08:24:02 kafka | zookeeper.clientCnxnSocket = null 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.238653541Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.702528ms 08:24:02 kafka | zookeeper.connect = zookeeper:2181 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version 
VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.243481451Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 08:24:02 kafka | zookeeper.connection.timeout.ms = null 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.2450202Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.53888ms 08:24:02 policy-pap | sasl.oauthbearer.expected.audience = null 08:24:02 kafka | zookeeper.max.in.flight.requests = 10 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.248668148Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 08:24:02 kafka | zookeeper.metadata.migration.enable = false 08:24:02 kafka | zookeeper.metadata.migration.min.batch.size = 200 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.249780326Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.112898ms 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.254806075Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.254997175Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=192.189µs 08:24:02 policy-pap | sasl.oauthbearer.expected.issuer = null 08:24:02 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.258270794Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.258650653Z level=info msg="Migration successfully 
executed" id="save existing dashboard data in dashboard_version table v1" duration=379.649µs 08:24:02 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.262110121Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.262178405Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=66.764µs 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.266448885Z level=info msg="Executing migration" id="create team table" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.266997864Z level=info msg="Migration successfully executed" id="create team table" duration=548.669µs 08:24:02 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.276111764Z level=info msg="Executing migration" id="add index team.org_id" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.277151288Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.039604ms 08:24:02 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.280389115Z level=info msg="Executing migration" id="add unique index team_org_id_name" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.28184931Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.460035ms 08:24:02 
policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.2855186Z level=info msg="Executing migration" id="Add column uid in team" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.290637714Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=5.120164ms 08:24:02 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.301173188Z level=info msg="Executing migration" id="Update uid column values in team" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.301838732Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=673.485µs 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.309323499Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.310658208Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.334179ms 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.32098604Z level=info msg="Executing migration" id="create team member table" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.321860506Z level=info msg="Migration successfully executed" id="create team member table" duration=874.546µs 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.328674837Z level=info msg="Executing migration" id="add index team_member.org_id" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.329901551Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.226433ms 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator 
t=2024-04-26T08:21:35.336581736Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.337410258Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=828.002µs 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.341434075Z level=info msg="Executing migration" id="add index team_member.team_id" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.342172534Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=738.019µs 08:24:02 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.346228793Z level=info msg="Executing migration" id="Add column email to team table" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.351163008Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.933104ms 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.354571394Z level=info msg="Executing migration" id="Add column external to team_member table" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.359249185Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.676682ms 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 08:24:02 grafana | logger=migrator 
t=2024-04-26T08:21:35.362648841Z level=info msg="Executing migration" id="Add column permission to team_member table" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.367219356Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.569285ms 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.371254065Z level=info msg="Executing migration" id="create dashboard acl table" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.372193723Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=939.378µs 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.378241185Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.379312801Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.071856ms 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.384590113Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.385778295Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.187292ms 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.397247756Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.399407438Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=2.159861ms 08:24:02 policy-db-migrator | > upgrade 0770-toscarequirement.sql 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.404619787Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 08:24:02 grafana | logger=migrator 
t=2024-04-26T08:21:35.406551996Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.934799ms 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.409834636Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.410755693Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=920.407µs 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.41513748Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.41631047Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.17156ms 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.419678154Z level=info msg="Executing migration" id="add index dashboard_permission" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.420943739Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.264905ms 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.424517554Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.424967027Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=448.723µs 08:24:02 
policy-db-migrator |
08:24:02 kafka | zookeeper.session.timeout.ms = 18000
08:24:02 policy-db-migrator | > upgrade 0780-toscarequirements.sql
08:24:02 kafka | zookeeper.set.acl = false
08:24:02 kafka | zookeeper.ssl.cipher.suites = null
08:24:02 policy-db-migrator | --------------
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.4294844Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version))
08:24:02 kafka | zookeeper.ssl.client.enable = false
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.429825078Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=340.838µs
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | zookeeper.ssl.crl.enable = false
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.435144382Z level=info msg="Executing migration" id="create tag table"
08:24:02 policy-db-migrator |
08:24:02 kafka | zookeeper.ssl.enabled.protocols = null
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.436866492Z level=info msg="Migration successfully executed" id="create tag table" duration=1.721449ms
08:24:02 policy-db-migrator |
08:24:02 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.446247415Z level=info msg="Executing migration" id="add index tag.key_value"
08:24:02 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql
08:24:02 kafka | zookeeper.ssl.keystore.location = null
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.447464498Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.218973ms
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | zookeeper.ssl.keystore.password = null
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.455161456Z level=info msg="Executing migration" id="create login attempt table"
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
08:24:02 kafka | zookeeper.ssl.keystore.type = null
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.456112114Z level=info msg="Migration successfully executed" id="create login attempt table" duration=950.928µs
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | zookeeper.ssl.ocsp.enable = false
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.460028527Z level=info msg="Executing migration" id="add index login_attempt.username"
08:24:02 policy-db-migrator |
08:24:02 kafka | zookeeper.ssl.protocol = TLSv1.2
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.460940994Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=912.517µs
08:24:02 policy-db-migrator |
08:24:02 kafka | zookeeper.ssl.truststore.location = null
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.465191013Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
08:24:02 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql
08:24:02 kafka | zookeeper.ssl.truststore.password = null
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.466239848Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.049415ms
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | zookeeper.ssl.truststore.type = null
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version))
08:24:02 kafka | (kafka.server.KafkaConfig)
08:24:02 kafka | [2024-04-26 08:21:36,122] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | [2024-04-26 08:21:36,123] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
08:24:02 kafka | [2024-04-26 08:21:36,136] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
08:24:02 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
08:24:02 policy-db-migrator |
08:24:02 kafka | [2024-04-26 08:21:36,129] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
08:24:02 kafka | [2024-04-26 08:21:36,171] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
08:24:02 policy-pap | sasl.oauthbearer.scope.claim.name = scope
08:24:02 policy-db-migrator |
08:24:02 kafka | [2024-04-26 08:21:36,175] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager)
08:24:02 kafka | [2024-04-26 08:21:36,184] INFO Loaded 0 logs in 13ms (kafka.log.LogManager)
08:24:02 kafka | [2024-04-26 08:21:36,186] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
08:24:02 kafka | [2024-04-26 08:21:36,187] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
08:24:02 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql
08:24:02 kafka | [2024-04-26 08:21:36,200] INFO Starting the log cleaner (kafka.log.LogCleaner)
08:24:02 kafka | [2024-04-26 08:21:36,246] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | [2024-04-26 08:21:36,265] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
08:24:02 kafka | [2024-04-26 08:21:36,282] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName))
08:24:02 kafka | [2024-04-26 08:21:36,347] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
08:24:02 kafka | [2024-04-26 08:21:36,712] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | [2024-04-26 08:21:36,734] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
08:24:02 kafka | [2024-04-26 08:21:36,734] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
08:24:02 policy-db-migrator |
08:24:02 kafka | [2024-04-26 08:21:36,740] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
08:24:02 kafka | [2024-04-26 08:21:36,745] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
08:24:02 policy-db-migrator |
08:24:02 kafka | [2024-04-26 08:21:36,770] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
08:24:02 kafka | [2024-04-26 08:21:36,772] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
08:24:02 policy-db-migrator | > upgrade 0820-toscatrigger.sql
08:24:02 kafka | [2024-04-26 08:21:36,775] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
08:24:02 kafka | [2024-04-26 08:21:36,775] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | sasl.oauthbearer.sub.claim.name = sub
08:24:02 kafka | [2024-04-26 08:21:36,777] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
08:24:02 kafka | [2024-04-26 08:21:36,791] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
08:24:02 kafka | [2024-04-26 08:21:36,793] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager)
08:24:02 kafka | [2024-04-26 08:21:36,824] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName))
08:24:02 policy-pap | sasl.oauthbearer.token.endpoint.url = null
08:24:02 kafka | [2024-04-26 08:21:36,850] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1714119696840,1714119696840,1,0,0,72057609718923265,258,0,27
08:24:02 kafka | (kafka.zk.KafkaZkClient)
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | security.protocol = PLAINTEXT
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.471635855Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
08:24:02 kafka | [2024-04-26 08:21:36,852] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
08:24:02 policy-db-migrator |
08:24:02 policy-pap | security.providers = null
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.485975106Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=14.332811ms
08:24:02 kafka | [2024-04-26 08:21:36,922] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
08:24:02 policy-db-migrator |
08:24:02 policy-pap | send.buffer.bytes = 131072
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.48973645Z level=info msg="Executing migration" id="create login_attempt v2"
08:24:02 kafka | [2024-04-26 08:21:36,929] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
08:24:02 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql
08:24:02 policy-pap | session.timeout.ms = 45000
08:24:02 kafka | [2024-04-26 08:21:36,937] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | socket.connection.setup.timeout.max.ms = 30000
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.490355412Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=614.702µs
08:24:02 kafka | [2024-04-26 08:21:36,937] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
08:24:02 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion)
08:24:02 policy-pap | socket.connection.setup.timeout.ms = 10000
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.494845594Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
08:24:02 kafka | [2024-04-26 08:21:36,945] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | ssl.cipher.suites = null
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.495528849Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=683.465µs
08:24:02 kafka | [2024-04-26 08:21:36,957] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
08:24:02 policy-db-migrator |
08:24:02 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.500305275Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
08:24:02 kafka | [2024-04-26 08:21:36,961] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
08:24:02 policy-db-migrator |
08:24:02 policy-pap | ssl.endpoint.identification.algorithm = https
08:24:02 kafka | [2024-04-26 08:21:36,963] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
08:24:02 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql
08:24:02 policy-pap | ssl.engine.factory.class = null
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | ssl.key.password = null
08:24:02 kafka | [2024-04-26 08:21:36,966] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
08:24:02 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion)
08:24:02 policy-pap | ssl.keymanager.algorithm = SunX509
08:24:02 kafka | [2024-04-26 08:21:36,969] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | ssl.keystore.certificate.chain = null
08:24:02 kafka | [2024-04-26 08:21:36,988] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
08:24:02 policy-db-migrator |
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.500912687Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=606.961µs
08:24:02 kafka | [2024-04-26 08:21:36,991] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
08:24:02 policy-pap | ssl.keystore.key = null
08:24:02 policy-db-migrator |
08:24:02 kafka | [2024-04-26 08:21:36,991] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
08:24:02 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql
08:24:02 policy-pap | ssl.keystore.location = null
08:24:02 kafka | [2024-04-26 08:21:36,998] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache)
08:24:02 policy-pap | ssl.keystore.password = null
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | [2024-04-26 08:21:36,998] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
08:24:02 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion)
08:24:02 policy-pap | ssl.keystore.type = JKS
08:24:02 kafka | [2024-04-26 08:21:37,006] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
08:24:02 policy-pap | ssl.protocol = TLSv1.3
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | [2024-04-26 08:21:37,013] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
08:24:02 policy-db-migrator |
08:24:02 policy-pap | ssl.provider = null
08:24:02 policy-db-migrator |
08:24:02 policy-pap | ssl.secure.random.implementation = null
08:24:02 kafka | [2024-04-26 08:21:37,018] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
08:24:02 policy-pap | ssl.trustmanager.algorithm = PKIX
08:24:02 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql
08:24:02 kafka | [2024-04-26 08:21:37,035] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
08:24:02 policy-pap | ssl.truststore.certificates = null
08:24:02 kafka | [2024-04-26 08:21:37,048] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
08:24:02 policy-db-migrator | --------------
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.504432168Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
08:24:02 policy-pap | ssl.truststore.location = null
08:24:02 kafka | [2024-04-26 08:21:37,063] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
08:24:02 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.505059181Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=626.823µs
08:24:02 policy-pap | ssl.truststore.password = null
08:24:02 kafka | [2024-04-26 08:21:37,070] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
08:24:02 policy-db-migrator | --------------
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.509409275Z level=info msg="Executing migration" id="create user auth table"
08:24:02 policy-pap | ssl.truststore.type = JKS
08:24:02 kafka | [2024-04-26 08:21:37,074] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
08:24:02 policy-db-migrator |
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.510213767Z level=info msg="Migration successfully executed" id="create user auth table" duration=803.902µs
08:24:02 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
08:24:02 kafka | [2024-04-26 08:21:37,086] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer)
08:24:02 policy-db-migrator |
08:24:02 policy-pap |
08:24:02 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
08:24:02 kafka | [2024-04-26 08:21:37,087] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.514304618Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
08:24:02 policy-pap | [2024-04-26T08:22:01.138+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | [2024-04-26 08:21:37,087] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.515276239Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=969.71µs
08:24:02 policy-pap | [2024-04-26T08:22:01.138+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
08:24:02 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion)
08:24:02 kafka | [2024-04-26 08:21:37,088] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController)
08:24:02 policy-pap | [2024-04-26T08:22:01.138+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714119721138
08:24:02 kafka | [2024-04-26 08:21:37,088] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController)
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | [2024-04-26T08:22:01.139+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap
08:24:02 policy-db-migrator |
08:24:02 kafka | [2024-04-26 08:21:37,089] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.522297121Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
08:24:02 policy-pap | [2024-04-26T08:22:01.507+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json
08:24:02 policy-db-migrator |
08:24:02 kafka | [2024-04-26 08:21:37,091] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController)
08:24:02 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
08:24:02 policy-pap | [2024-04-26T08:22:01.721+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning
08:24:02 kafka | [2024-04-26 08:21:37,092] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor)
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | [2024-04-26 08:21:37,092] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController)
08:24:02 policy-pap | [2024-04-26T08:22:01.995+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@cd93621, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@3b1137b0, org.springframework.security.web.context.SecurityContextHolderFilter@20f99c18, org.springframework.security.web.header.HeaderWriterFilter@28269c65, org.springframework.security.web.authentication.logout.LogoutFilter@5ffdd510, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@1870b9b8, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@76e2a621, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@2e7517aa, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@21ba0d33, org.springframework.security.web.access.ExceptionTranslationFilter@20518250, org.springframework.security.web.access.intercept.AuthorizationFilter@912747d]
08:24:02 kafka | [2024-04-26 08:21:37,093] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController)
08:24:02 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion)
08:24:02 kafka | [2024-04-26 08:21:37,094] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager)
08:24:02 policy-pap | [2024-04-26T08:22:02.775+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path ''
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | [2024-04-26 08:21:37,097] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor)
08:24:02 policy-db-migrator |
08:24:02 policy-pap | [2024-04-26T08:22:02.866+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"]
08:24:02 kafka | [2024-04-26 08:21:37,097] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController)
08:24:02 policy-pap | [2024-04-26T08:22:02.899+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1'
08:24:02 policy-db-migrator |
08:24:02 policy-pap | [2024-04-26T08:22:02.915+00:00|INFO|ServiceManager|main] Policy PAP starting
08:24:02 kafka | [2024-04-26 08:21:37,100] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger)
08:24:02 policy-pap | [2024-04-26T08:22:02.916+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry
08:24:02 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
08:24:02 policy-pap | [2024-04-26T08:22:02.916+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters
08:24:02 kafka | [2024-04-26 08:21:37,110] INFO Kafka version: 7.6.1-ccs (org.apache.kafka.common.utils.AppInfoParser)
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | [2024-04-26T08:22:02.917+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.522560104Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=274.364µs
08:24:02 kafka | [2024-04-26 08:21:37,110] INFO Kafka commitId: 11e81ad2a49db00b1d2b8c731409cd09e563de67 (org.apache.kafka.common.utils.AppInfoParser)
08:24:02 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion)
08:24:02 policy-pap | [2024-04-26T08:22:02.917+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.526012902Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
08:24:02 kafka | [2024-04-26 08:21:37,110] INFO Kafka startTimeMs: 1714119697103 (org.apache.kafka.common.utils.AppInfoParser)
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | [2024-04-26T08:22:02.918+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.531320307Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=5.308285ms
08:24:02 kafka | [2024-04-26 08:21:37,112] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
08:24:02 policy-db-migrator |
08:24:02 policy-pap | [2024-04-26T08:22:02.918+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.536871443Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
08:24:02 kafka | [2024-04-26 08:21:37,116] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine)
08:24:02 policy-db-migrator |
08:24:02 policy-pap | [2024-04-26T08:22:02.920+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=db954cd2-8764-4a44-90af-3bb7f2069f83, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4271b748
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.540622276Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=3.748983ms
08:24:02 kafka | [2024-04-26 08:21:37,117] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine)
08:24:02 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
08:24:02 policy-pap | [2024-04-26T08:22:02.933+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=db954cd2-8764-4a44-90af-3bb7f2069f83, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.545674107Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
08:24:02 kafka | [2024-04-26 08:21:37,130] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread)
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | [2024-04-26T08:22:02.934+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
08:24:02 policy-pap | allow.auto.create.topics = true
08:24:02 kafka | [2024-04-26 08:21:37,134] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine)
08:24:02 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.549254892Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=3.580485ms
08:24:02 policy-pap | auto.commit.interval.ms = 5000
08:24:02 kafka | [2024-04-26 08:21:37,135] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
08:24:02 policy-db-migrator | --------------
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.552213305Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
08:24:02 policy-pap | auto.include.jmx.reporter = true
08:24:02 kafka | [2024-04-26 08:21:37,136] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine)
08:24:02 policy-db-migrator |
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.555984549Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=3.770454ms
08:24:02 policy-pap | auto.offset.reset = latest
08:24:02 kafka | [2024-04-26 08:21:37,137] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine)
08:24:02 policy-db-migrator |
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.559055008Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
08:24:02 policy-pap | bootstrap.servers = [kafka:9092]
08:24:02 kafka | [2024-04-26 08:21:37,145] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
08:24:02 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
08:24:02 policy-pap | check.crcs = true
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | [2024-04-26 08:21:37,145] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController)
08:24:02 policy-pap | client.dns.lookup = use_all_dns_ips
08:24:02 kafka | [2024-04-26 08:21:37,155] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController)
08:24:02 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion)
08:24:02 policy-pap | client.id = consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | client.rack =
08:24:02 policy-db-migrator |
08:24:02 policy-pap | connections.max.idle.ms = 540000
08:24:02 policy-db-migrator |
08:24:02 policy-pap | default.api.timeout.ms = 60000
08:24:02 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
08:24:02 kafka | [2024-04-26 08:21:37,155] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.559957255Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=897.437µs
08:24:02 policy-pap | enable.auto.commit = true
08:24:02 kafka | [2024-04-26 08:21:37,156] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController)
08:24:02 policy-pap | exclude.internal.topics = true
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | [2024-04-26 08:21:37,157] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController)
08:24:02 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion)
08:24:02 policy-pap | fetch.max.bytes = 52428800
08:24:02 kafka | [2024-04-26 08:21:37,159] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController)
08:24:02 policy-pap | fetch.max.wait.ms = 500
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | [2024-04-26 08:21:37,176] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController)
08:24:02 policy-db-migrator |
08:24:02 policy-pap | fetch.min.bytes = 1
08:24:02 kafka | [2024-04-26 08:21:37,207] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
08:24:02 policy-pap | group.id = db954cd2-8764-4a44-90af-3bb7f2069f83
08:24:02 kafka | [2024-04-26 08:21:37,255] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
08:24:02 policy-db-migrator |
08:24:02 policy-pap | group.instance.id = null
08:24:02 kafka | [2024-04-26 08:21:37,262] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread)
08:24:02 policy-pap | heartbeat.interval.ms = 3000
08:24:02 policy-pap | interceptor.classes = []
08:24:02 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql
08:24:02 kafka | [2024-04-26 08:21:42,178] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController)
08:24:02 policy-pap | internal.leave.group.on.close = true
08:24:02 kafka | [2024-04-26 08:21:42,179] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController)
08:24:02 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | [2024-04-26 08:22:03,412] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController)
08:24:02 policy-pap | isolation.level = read_uncommitted
08:24:02 kafka | [2024-04-26 08:22:03,418] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
08:24:02 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
08:24:02 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP)
08:24:02 kafka | [2024-04-26 08:22:03,418] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
08:24:02 policy-pap | max.partition.fetch.bytes = 1048576
08:24:02 kafka | [2024-04-26 08:22:03,441] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController)
08:24:02 policy-db-migrator | --------------
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.565318561Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
08:24:02 policy-pap | max.poll.interval.ms = 300000
08:24:02 kafka | [2024-04-26 08:22:03,465] INFO [Controller id=1] New topics: [Set(policy-pdp-pap)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(JNNo8CVWSdWgRv4ouhjw3w),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController)
08:24:02 policy-db-migrator |
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.570368221Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.04583ms
08:24:02 policy-pap | max.poll.records = 500
08:24:02 kafka | [2024-04-26 08:22:03,466] INFO [Controller id=1] New partition creation callback for policy-pdp-pap-0 (kafka.controller.KafkaController)
08:24:02 policy-db-migrator |
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.574328746Z level=info msg="Executing migration" id="create server_lock table"
08:24:02 policy-pap | metadata.max.age.ms = 300000
08:24:02 kafka | [2024-04-26 08:22:03,469] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
08:24:02 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.575161709Z level=info msg="Migration successfully executed" id="create server_lock table" duration=833.593µs
08:24:02 policy-pap | metric.reporters = []
08:24:02 kafka | [2024-04-26 08:22:03,469] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
08:24:02 policy-db-migrator | --------------
08:24:02 grafana
| logger=migrator t=2024-04-26T08:21:35.578030987Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 08:24:02 policy-pap | metrics.num.samples = 2 08:24:02 kafka | [2024-04-26 08:22:03,474] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 08:24:02 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.578711622Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=680.645µs 08:24:02 policy-pap | metrics.recording.level = INFO 08:24:02 kafka | [2024-04-26 08:22:03,474] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.586927656Z level=info msg="Executing migration" id="create user auth token table" 08:24:02 policy-pap | metrics.sample.window.ms = 30000 08:24:02 kafka | [2024-04-26 08:22:03,509] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.588099197Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.176491ms 08:24:02 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 08:24:02 kafka | [2024-04-26 08:22:03,512] 
TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.593433822Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 08:24:02 policy-pap | receive.buffer.bytes = 65536 08:24:02 kafka | [2024-04-26 08:22:03,513] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 1 become-leader and 0 become-follower partitions (state.change.logger) 08:24:02 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.594532719Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.099517ms 08:24:02 policy-pap | reconnect.backoff.max.ms = 1000 08:24:02 kafka | [2024-04-26 08:22:03,516] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 1 partitions (state.change.logger) 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.599211361Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 08:24:02 policy-pap | reconnect.backoff.ms = 50 08:24:02 kafka | [2024-04-26 08:22:03,517] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 08:24:02 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 08:24:02 
grafana | logger=migrator t=2024-04-26T08:21:35.600785802Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.577521ms 08:24:02 policy-pap | request.timeout.ms = 30000 08:24:02 kafka | [2024-04-26 08:22:03,517] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.611710476Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 08:24:02 policy-pap | retry.backoff.ms = 100 08:24:02 kafka | [2024-04-26 08:22:03,520] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 1 partitions (state.change.logger) 08:24:02 policy-db-migrator | 08:24:02 policy-pap | sasl.client.callback.handler.class = null 08:24:02 policy-db-migrator | 08:24:02 kafka | [2024-04-26 08:22:03,521] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 08:24:02 policy-pap | sasl.jaas.config = null 08:24:02 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 08:24:02 kafka | [2024-04-26 08:22:03,526] INFO [Controller id=1] New topics: [Set(__consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(__consumer_offsets,Some(RfiyP89qRi-5ZTNhftzAtg),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, 
removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 08:24:02 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 08:24:02 policy-db-migrator | -------------- 08:24:02 kafka | [2024-04-26 08:22:03,526] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-37,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) 08:24:02 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 08:24:02 kafka | [2024-04-26 08:22:03,527] INFO [Controller id=1 epoch=1] Changed 
partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 08:24:02 policy-db-migrator | -------------- 08:24:02 kafka | [2024-04-26 08:22:03,527] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 08:24:02 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 08:24:02 policy-db-migrator | 08:24:02 kafka | [2024-04-26 08:22:03,527] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 08:24:02 policy-pap | sasl.kerberos.service.name = null 08:24:02 kafka | [2024-04-26 08:22:03,527] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 08:24:02 policy-db-migrator | 08:24:02 kafka | [2024-04-26 08:22:03,527] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.612746709Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.027492ms 08:24:02 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 08:24:02 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 08:24:02 kafka | [2024-04-26 08:22:03,528] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 08:24:02 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.617188928Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 08:24:02 policy-db-migrator | -------------- 08:24:02 kafka 
| [2024-04-26 08:22:03,528] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 08:24:02 policy-pap | sasl.login.callback.handler.class = null 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.622728724Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.536436ms 08:24:02 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 08:24:02 kafka | [2024-04-26 08:22:03,528] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 08:24:02 policy-pap | sasl.login.class = null 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-pap | sasl.login.connect.timeout.ms = null 08:24:02 kafka | [2024-04-26 08:22:03,528] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 08:24:02 policy-pap | sasl.login.read.timeout.ms = null 08:24:02 kafka | [2024-04-26 08:22:03,528] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 08:24:02 policy-pap | sasl.login.refresh.buffer.seconds = 300 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.626032695Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 08:24:02 policy-pap | sasl.login.refresh.min.period.seconds = 60 08:24:02 kafka | [2024-04-26 08:22:03,528] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 
(state.change.logger) 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.626975133Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=945.588µs 08:24:02 policy-pap | sasl.login.refresh.window.factor = 0.8 08:24:02 kafka | [2024-04-26 08:22:03,528] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 08:24:02 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.631514748Z level=info msg="Executing migration" id="create cache_data table" 08:24:02 policy-pap | sasl.login.refresh.window.jitter = 0.05 08:24:02 kafka | [2024-04-26 08:22:03,528] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.632405104Z level=info msg="Migration successfully executed" id="create cache_data table" duration=890.565µs 08:24:02 policy-pap | sasl.login.retry.backoff.max.ms = 10000 08:24:02 kafka | [2024-04-26 08:22:03,529] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 08:24:02 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 08:24:02 policy-pap | sasl.login.retry.backoff.ms = 100 08:24:02 kafka | [2024-04-26 08:22:03,529] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 
(state.change.logger) 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-pap | sasl.mechanism = GSSAPI 08:24:02 policy-db-migrator | 08:24:02 kafka | [2024-04-26 08:22:03,529] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 08:24:02 kafka | [2024-04-26 08:22:03,529] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 08:24:02 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 08:24:02 policy-db-migrator | 08:24:02 kafka | [2024-04-26 08:22:03,529] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 08:24:02 policy-pap | sasl.oauthbearer.expected.audience = null 08:24:02 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.636063082Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 08:24:02 kafka | [2024-04-26 08:22:03,529] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 08:24:02 policy-pap | sasl.oauthbearer.expected.issuer = null 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.637120497Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.050145ms 08:24:02 kafka | [2024-04-26 08:22:03,529] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 08:24:02 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 08:24:02 policy-db-migrator | ALTER TABLE 
toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.646127612Z level=info msg="Executing migration" id="create short_url table v1" 08:24:02 kafka | [2024-04-26 08:22:03,529] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 08:24:02 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.647732965Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.605573ms 08:24:02 kafka | [2024-04-26 08:22:03,530] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 08:24:02 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 08:24:02 policy-db-migrator | 08:24:02 kafka | [2024-04-26 08:22:03,530] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 08:24:02 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.653166375Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 08:24:02 kafka | [2024-04-26 08:22:03,530] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 08:24:02 policy-pap | sasl.oauthbearer.scope.claim.name = scope 08:24:02 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 08:24:02 kafka | [2024-04-26 
08:22:03,530] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.654179507Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.013592ms 08:24:02 policy-pap | sasl.oauthbearer.sub.claim.name = sub 08:24:02 kafka | [2024-04-26 08:22:03,530] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 08:24:02 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.659408638Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 08:24:02 kafka | [2024-04-26 08:22:03,530] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.659476031Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=67.993µs 08:24:02 policy-pap | sasl.oauthbearer.token.endpoint.url = null 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.696604157Z level=info msg="Executing migration" id="delete alert_definition table" 08:24:02 kafka | [2024-04-26 08:22:03,530] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 08:24:02 policy-pap | security.protocol = PLAINTEXT 
08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.696938534Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=325.916µs 08:24:02 policy-pap | security.providers = null 08:24:02 policy-db-migrator | 08:24:02 kafka | [2024-04-26 08:22:03,531] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.704048721Z level=info msg="Executing migration" id="recreate alert_definition table" 08:24:02 policy-pap | send.buffer.bytes = 131072 08:24:02 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 08:24:02 kafka | [2024-04-26 08:22:03,531] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.705165619Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.115828ms 08:24:02 policy-db-migrator | -------------- 08:24:02 kafka | [2024-04-26 08:22:03,531] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.712648686Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 08:24:02 policy-pap | session.timeout.ms = 45000 08:24:02 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 08:24:02 kafka | [2024-04-26 08:22:03,531] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.71370408Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.055554ms
08:24:02 policy-pap | socket.connection.setup.timeout.max.ms = 30000
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | [2024-04-26 08:22:03,531] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.717906976Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
08:24:02 policy-pap | socket.connection.setup.timeout.ms = 10000
08:24:02 policy-db-migrator | 
08:24:02 kafka | [2024-04-26 08:22:03,531] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.71892517Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.017743ms
08:24:02 policy-pap | ssl.cipher.suites = null
08:24:02 policy-db-migrator | 
08:24:02 kafka | [2024-04-26 08:22:03,531] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.722493774Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
08:24:02 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
08:24:02 kafka | [2024-04-26 08:22:03,531] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
08:24:02 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.722559927Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=66.533µs
08:24:02 policy-pap | ssl.endpoint.identification.algorithm = https
08:24:02 policy-db-migrator | --------------
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.72590635Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
08:24:02 kafka | [2024-04-26 08:22:03,532] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
08:24:02 policy-pap | ssl.engine.factory.class = null
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.726910541Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.004371ms
08:24:02 kafka | [2024-04-26 08:22:03,532] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
08:24:02 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
08:24:02 policy-pap | ssl.key.password = null
08:24:02 kafka | [2024-04-26 08:22:03,532] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
08:24:02 policy-db-migrator | --------------
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.732150192Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
08:24:02 policy-pap | ssl.keymanager.algorithm = SunX509
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.733560075Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.408593ms
08:24:02 kafka | [2024-04-26 08:22:03,532] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
08:24:02 policy-db-migrator | 
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.738134331Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
08:24:02 kafka | [2024-04-26 08:22:03,532] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
08:24:02 policy-pap | ssl.keystore.certificate.chain = null
08:24:02 policy-db-migrator | 
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.739812198Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.677707ms
08:24:02 kafka | [2024-04-26 08:22:03,532] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
08:24:02 policy-pap | ssl.keystore.key = null
08:24:02 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
08:24:02 kafka | [2024-04-26 08:22:03,532] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
08:24:02 policy-pap | ssl.keystore.location = null
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.743187691Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | [2024-04-26 08:22:03,533] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
08:24:02 policy-pap | ssl.keystore.password = null
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.744206024Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.015223ms
08:24:02 kafka | [2024-04-26 08:22:03,533] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.749206802Z level=info msg="Executing migration" id="Add column paused in alert_definition"
08:24:02 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
08:24:02 policy-pap | ssl.keystore.type = JKS
08:24:02 kafka | [2024-04-26 08:22:03,533] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.759007008Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=9.796216ms
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | ssl.protocol = TLSv1.3
08:24:02 kafka | [2024-04-26 08:22:03,533] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.762288928Z level=info msg="Executing migration" id="drop alert_definition table"
08:24:02 policy-db-migrator | 
08:24:02 kafka | [2024-04-26 08:22:03,533] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
08:24:02 policy-pap | ssl.provider = null
08:24:02 kafka | [2024-04-26 08:22:03,533] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
08:24:02 policy-db-migrator | 
08:24:02 policy-pap | ssl.secure.random.implementation = null
08:24:02 kafka | [2024-04-26 08:22:03,533] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
08:24:02 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.76485856Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=2.568462ms
08:24:02 policy-pap | ssl.trustmanager.algorithm = PKIX
08:24:02 policy-db-migrator | --------------
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.774510238Z level=info msg="Executing migration" id="delete alert_definition_version table"
08:24:02 kafka | [2024-04-26 08:22:03,534] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
08:24:02 policy-pap | ssl.truststore.certificates = null
08:24:02 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.774597702Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=88.034µs
08:24:02 kafka | [2024-04-26 08:22:03,537] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 policy-pap | ssl.truststore.location = null
08:24:02 policy-db-migrator | --------------
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.781773763Z level=info msg="Executing migration" id="recreate alert_definition_version table"
08:24:02 kafka | [2024-04-26 08:22:03,538] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 policy-pap | ssl.truststore.password = null
08:24:02 policy-db-migrator | 
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.783234409Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.460567ms
08:24:02 kafka | [2024-04-26 08:22:03,538] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 policy-pap | ssl.truststore.type = JKS
08:24:02 policy-db-migrator | 
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.78673755Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
08:24:02 kafka | [2024-04-26 08:22:03,538] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
08:24:02 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.787806244Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.067974ms
08:24:02 kafka | [2024-04-26 08:22:03,538] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 policy-pap | 
08:24:02 policy-db-migrator | --------------
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.793845086Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
08:24:02 kafka | [2024-04-26 08:22:03,538] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 policy-pap | [2024-04-26T08:22:02.940+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.794848448Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.002892ms
08:24:02 kafka | [2024-04-26 08:22:03,538] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT
08:24:02 policy-pap | [2024-04-26T08:22:02.940+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.799724399Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
08:24:02 kafka | [2024-04-26 08:22:03,538] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | [2024-04-26T08:22:02.940+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714119722940
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.799792133Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=67.854µs
08:24:02 kafka | [2024-04-26 08:22:03,538] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 policy-db-migrator | 
08:24:02 policy-pap | [2024-04-26T08:22:02.940+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Subscribed to topic(s): policy-pdp-pap
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.804073044Z level=info msg="Executing migration" id="drop alert_definition_version table"
08:24:02 kafka | [2024-04-26 08:22:03,538] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 policy-db-migrator | 
08:24:02 policy-pap | [2024-04-26T08:22:02.941+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.805588022Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.514188ms
08:24:02 kafka | [2024-04-26 08:22:03,539] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 policy-db-migrator | > upgrade 0100-pdp.sql
08:24:02 policy-pap | [2024-04-26T08:22:02.941+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=eb01c65d-170a-46d6-9ba7-54033f13f8dc, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@4bc9451b
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.811337719Z level=info msg="Executing migration" id="create alert_instance table"
08:24:02 kafka | [2024-04-26 08:22:03,539] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | [2024-04-26T08:22:02.941+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=eb01c65d-170a-46d6-9ba7-54033f13f8dc, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting
08:24:02 kafka | [2024-04-26 08:22:03,539] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.81233505Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=999.131µs
08:24:02 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY
08:24:02 policy-pap | [2024-04-26T08:22:02.941+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
08:24:02 kafka | [2024-04-26 08:22:03,539] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.819920862Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | allow.auto.create.topics = true
08:24:02 kafka | [2024-04-26 08:22:03,539] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.821589938Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.668646ms
08:24:02 policy-db-migrator | 
08:24:02 policy-pap | auto.commit.interval.ms = 5000
08:24:02 kafka | [2024-04-26 08:22:03,539] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.825819386Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
08:24:02 policy-db-migrator | 
08:24:02 policy-pap | auto.include.jmx.reporter = true
08:24:02 kafka | [2024-04-26 08:22:03,539] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.827413019Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.591213ms
08:24:02 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
08:24:02 policy-pap | auto.offset.reset = latest
08:24:02 kafka | [2024-04-26 08:22:03,539] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.831677899Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | bootstrap.servers = [kafka:9092]
08:24:02 kafka | [2024-04-26 08:22:03,540] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.837489919Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=5.81116ms
08:24:02 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version)
08:24:02 policy-pap | check.crcs = true
08:24:02 kafka | [2024-04-26 08:22:03,540] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.842296447Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | client.dns.lookup = use_all_dns_ips
08:24:02 kafka | [2024-04-26 08:22:03,540] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.843350931Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.056234ms
08:24:02 policy-pap | client.id = consumer-policy-pap-4
08:24:02 kafka | [2024-04-26 08:22:03,540] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.848717518Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
08:24:02 policy-db-migrator | 
08:24:02 policy-pap | client.rack = 
08:24:02 kafka | [2024-04-26 08:22:03,540] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.849467437Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=750.709µs
08:24:02 policy-db-migrator | 
08:24:02 policy-pap | connections.max.idle.ms = 540000
08:24:02 kafka | [2024-04-26 08:22:03,540] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.855600644Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
08:24:02 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql
08:24:02 policy-pap | default.api.timeout.ms = 60000
08:24:02 kafka | [2024-04-26 08:22:03,540] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.881153472Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=25.547489ms
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | enable.auto.commit = true
08:24:02 kafka | [2024-04-26 08:22:03,540] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.887866559Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
08:24:02 kafka | [2024-04-26 08:22:03,540] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.909353758Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=21.4897ms
08:24:02 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
08:24:02 policy-pap | exclude.internal.topics = true
08:24:02 kafka | [2024-04-26 08:22:03,541] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.913842989Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | fetch.max.bytes = 52428800
08:24:02 kafka | [2024-04-26 08:22:03,541] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.914598188Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=755.629µs
08:24:02 policy-db-migrator | 
08:24:02 policy-pap | fetch.max.wait.ms = 500
08:24:02 kafka | [2024-04-26 08:22:03,541] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.922649424Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
08:24:02 policy-db-migrator | 
08:24:02 policy-pap | fetch.min.bytes = 1
08:24:02 kafka | [2024-04-26 08:22:03,541] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.923755111Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.106217ms
08:24:02 policy-db-migrator | > upgrade 0130-pdpstatistics.sql
08:24:02 policy-pap | group.id = policy-pap
08:24:02 kafka | [2024-04-26 08:22:03,541] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.930697889Z level=info msg="Executing migration" id="add current_reason column related to current_state"
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | group.instance.id = null
08:24:02 kafka | [2024-04-26 08:22:03,541] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.935281186Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=4.585707ms
08:24:02 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL
08:24:02 policy-pap | heartbeat.interval.ms = 3000
08:24:02 kafka | [2024-04-26 08:22:03,541] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.940044352Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | interceptor.classes = []
08:24:02 kafka | [2024-04-26 08:22:03,541] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 policy-db-migrator | 
08:24:02 policy-pap | internal.leave.group.on.close = true
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.945402238Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.358097ms
08:24:02 kafka | [2024-04-26 08:22:03,541] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 policy-db-migrator | 
08:24:02 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.951512374Z level=info msg="Executing migration" id="create alert_rule table"
08:24:02 kafka | [2024-04-26 08:22:03,542] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql
08:24:02 policy-pap | isolation.level = read_uncommitted
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.952489544Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=976.96µs
08:24:02 kafka | [2024-04-26 08:22:03,542] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.962414696Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
08:24:02 kafka | [2024-04-26 08:22:03,542] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num
08:24:02 policy-pap | max.partition.fetch.bytes = 1048576
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.963619978Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.205762ms
08:24:02 kafka | [2024-04-26 08:22:03,542] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | max.poll.interval.ms = 300000
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.967731501Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
08:24:02 kafka | [2024-04-26 08:22:03,542] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 policy-db-migrator | 
08:24:02 policy-pap | max.poll.records = 500
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.968549883Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=821.842µs
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | [2024-04-26 08:22:03,542] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 policy-pap | metadata.max.age.ms = 300000
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.972616333Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
08:24:02 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version)
08:24:02 kafka | [2024-04-26 08:22:03,543] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 policy-pap | metric.reporters = []
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.973453136Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=834.653µs
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | [2024-04-26 08:22:03,543] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 policy-pap | metrics.num.samples = 2
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.977535877Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
08:24:02 kafka | [2024-04-26 08:22:03,544] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 policy-pap | metrics.recording.level = INFO
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.977621291Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=85.734µs
08:24:02 policy-db-migrator | 
08:24:02 kafka | [2024-04-26 08:22:03,544] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 policy-pap | metrics.sample.window.ms = 30000
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.981438369Z level=info msg="Executing migration" id="add column for to alert_rule"
08:24:02 policy-db-migrator | 
08:24:02 kafka | [2024-04-26 08:22:03,544] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.986074747Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=4.635898ms
08:24:02 policy-db-migrator | > upgrade 0150-pdpstatistics.sql
08:24:02 kafka | [2024-04-26 08:22:03,544] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 policy-pap | receive.buffer.bytes = 65536
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.989730336Z level=info msg="Executing migration" id="add column annotations to alert_rule"
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | [2024-04-26 08:22:03,544] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 policy-pap | reconnect.backoff.max.ms = 1000
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.994124853Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.396577ms
08:24:02 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL
08:24:02 kafka | [2024-04-26 08:22:03,544] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger)
08:24:02 policy-pap | reconnect.backoff.ms = 50
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:35.997568181Z level=info msg="Executing migration" id="add column labels to alert_rule"
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | [2024-04-26 08:22:03,544] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
08:24:02 policy-pap | request.timeout.ms = 30000
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.003459575Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=5.890874ms
08:24:02 policy-db-migrator | 
08:24:02 kafka | [2024-04-26 08:22:03,563] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger)
08:24:02 policy-pap | retry.backoff.ms = 100
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.008464292Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
08:24:02 policy-db-migrator | 
08:24:02 kafka | [2024-04-26 08:22:03,565] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions Set(policy-pdp-pap-0) (kafka.server.ReplicaFetcherManager)
08:24:02 policy-pap | sasl.client.callback.handler.class = null
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.009629188Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.165076ms
08:24:02 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql
08:24:02 kafka | [2024-04-26 08:22:03,565] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger)
08:24:02 policy-pap | sasl.jaas.config = null
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.016720568Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | [2024-04-26 08:22:03,653] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.01778652Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.069283ms
08:24:02 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME
08:24:02 kafka | [2024-04-26 08:22:03,676] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager)
08:24:02 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.023072588Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | [2024-04-26 08:22:03,682] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition)
08:24:02 policy-pap | sasl.kerberos.service.name = null
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.027471943Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=4.399535ms
08:24:02 policy-db-migrator | 
08:24:02 kafka | [2024-04-26 08:22:03,684] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 policy-pap | 
sasl.kerberos.ticket.renew.jitter = 0.05 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.041558711Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 08:24:02 policy-db-migrator | 08:24:02 kafka | [2024-04-26 08:22:03,687] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(JNNo8CVWSdWgRv4ouhjw3w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 08:24:02 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.047480341Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=5.92153ms 08:24:02 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 08:24:02 kafka | [2024-04-26 08:22:03,696] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 08:24:02 policy-pap | sasl.login.callback.handler.class = null 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.050331841Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 08:24:02 kafka | [2024-04-26 08:22:03,700] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | sasl.login.class = null 08:24:02 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.051180242Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, 
dashboard_uid and panel_id columns" duration=848.041µs 08:24:02 kafka | [2024-04-26 08:22:03,701] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | sasl.login.connect.timeout.ms = null 08:24:02 policy-db-migrator | JOIN pdpstatistics b 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.054250033Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 08:24:02 kafka | [2024-04-26 08:22:03,701] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | sasl.login.read.timeout.ms = null 08:24:02 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.059434626Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=5.184733ms 08:24:02 kafka | [2024-04-26 08:22:03,701] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | sasl.login.refresh.buffer.seconds = 300 08:24:02 policy-db-migrator | SET a.id = b.id 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.064968106Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 08:24:02 kafka | [2024-04-26 08:22:03,701] INFO [Controller id=1 
epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | sasl.login.refresh.min.period.seconds = 60 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.069583092Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.614616ms 08:24:02 kafka | [2024-04-26 08:22:03,701] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | sasl.login.refresh.window.factor = 0.8 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.072235121Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 08:24:02 kafka | [2024-04-26 08:22:03,701] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | sasl.login.refresh.window.jitter = 0.05 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.072384259Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=148.728µs 08:24:02 kafka | [2024-04-26 08:22:03,701] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | sasl.login.retry.backoff.max.ms = 10000 08:24:02 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.079469355Z level=info msg="Executing migration" id="create alert_rule_version table" 08:24:02 kafka | [2024-04-26 08:22:03,701] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | sasl.login.retry.backoff.ms = 100 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.080832052Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.366097ms 08:24:02 kafka | [2024-04-26 08:22:03,701] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | sasl.mechanism = GSSAPI 08:24:02 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.084680019Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 08:24:02 kafka | [2024-04-26 08:22:03,701] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, 
partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.085727101Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.047732ms 08:24:02 kafka | [2024-04-26 08:22:03,702] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | sasl.oauthbearer.expected.audience = null 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.08878946Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 08:24:02 kafka | [2024-04-26 08:22:03,702] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | sasl.oauthbearer.expected.issuer = null 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.089879954Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.090404ms 08:24:02 kafka | [2024-04-26 08:22:03,702] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) 
(state.change.logger) 08:24:02 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 08:24:02 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.094626005Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 08:24:02 kafka | [2024-04-26 08:22:03,702] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.094696648Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=71.003µs 08:24:02 kafka | [2024-04-26 08:22:03,702] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.09799291Z level=info msg="Executing migration" id="add column for to alert_rule_version" 08:24:02 kafka | [2024-04-26 08:22:03,702] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.104159711Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.166681ms 08:24:02 kafka | [2024-04-26 08:22:03,702] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | sasl.oauthbearer.scope.claim.name = scope 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.113938349Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 08:24:02 kafka | [2024-04-26 08:22:03,702] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | sasl.oauthbearer.sub.claim.name = sub 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.118663271Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=4.727931ms 08:24:02 kafka | [2024-04-26 08:22:03,703] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | sasl.oauthbearer.token.endpoint.url = null 08:24:02 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.121787293Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 08:24:02 kafka | [2024-04-26 08:22:03,703] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | security.protocol = PLAINTEXT 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.126460131Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=4.672108ms 08:24:02 kafka | [2024-04-26 08:22:03,703] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.130491659Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 08:24:02 kafka | [2024-04-26 08:22:03,703] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | security.providers = null 08:24:02 policy-db-migrator | 
-------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.134880503Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=4.391324ms 08:24:02 kafka | [2024-04-26 08:22:03,704] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | send.buffer.bytes = 131072 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.145948494Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 08:24:02 kafka | [2024-04-26 08:22:03,704] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | session.timeout.ms = 45000 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.150800251Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=4.854297ms 08:24:02 kafka | [2024-04-26 08:22:03,704] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | socket.connection.setup.timeout.max.ms = 30000 08:24:02 policy-db-migrator | > upgrade 0210-sequence.sql 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.154014028Z level=info msg="Executing migration" id="fix is_paused column for 
alert_rule_version table" 08:24:02 kafka | [2024-04-26 08:22:03,704] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | socket.connection.setup.timeout.ms = 10000 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.154108972Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=94.484µs 08:24:02 kafka | [2024-04-26 08:22:03,705] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | ssl.cipher.suites = null 08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.160336777Z level=info msg="Executing migration" id=create_alert_configuration_table 08:24:02 kafka | [2024-04-26 08:22:03,705] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.161478323Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.142807ms 08:24:02 kafka | 
[2024-04-26 08:22:03,705] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | ssl.endpoint.identification.algorithm = https 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.16654579Z level=info msg="Executing migration" id="Add column default in alert_configuration" 08:24:02 kafka | [2024-04-26 08:22:03,705] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | ssl.engine.factory.class = null 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.174249887Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=7.703537ms 08:24:02 kafka | [2024-04-26 08:22:03,705] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | ssl.key.password = null 08:24:02 policy-db-migrator | > upgrade 0220-sequence.sql 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.182473839Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 08:24:02 kafka | [2024-04-26 08:22:03,705] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state 
LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | ssl.keymanager.algorithm = SunX509 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.182525681Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=52.392µs 08:24:02 kafka | [2024-04-26 08:22:03,705] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | ssl.keystore.certificate.chain = null 08:24:02 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.185770019Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 08:24:02 kafka | [2024-04-26 08:22:03,705] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | ssl.keystore.key = null 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.195981789Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=10.21242ms 08:24:02 kafka | [2024-04-26 08:22:03,705] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, 
leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | ssl.keystore.location = null 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.20275209Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 08:24:02 kafka | [2024-04-26 08:22:03,705] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-pap | ssl.keystore.password = null 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.203711566Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=961.806µs 08:24:02 policy-pap | ssl.keystore.type = JKS 08:24:02 kafka | [2024-04-26 08:22:03,706] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.208543693Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 08:24:02 policy-pap | ssl.protocol = TLSv1.3 08:24:02 kafka | [2024-04-26 08:22:03,706] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) 
(state.change.logger) 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.214906194Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.3621ms 08:24:02 policy-pap | ssl.provider = null 08:24:02 kafka | [2024-04-26 08:22:03,706] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.219193633Z level=info msg="Executing migration" id=create_ngalert_configuration_table 08:24:02 policy-pap | ssl.secure.random.implementation = null 08:24:02 kafka | [2024-04-26 08:22:03,706] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.219972001Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=778.108µs 08:24:02 policy-pap | ssl.trustmanager.algorithm = PKIX 08:24:02 kafka | [2024-04-26 08:22:03,706] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.22382986Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 08:24:02 policy-pap | ssl.truststore.certificates = null 08:24:02 kafka | [2024-04-26 08:22:03,706] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.224771266Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=940.636µs 08:24:02 policy-pap | ssl.truststore.location = null 08:24:02 kafka | [2024-04-26 08:22:03,706] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.227897499Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 08:24:02 policy-pap | ssl.truststore.password = null 08:24:02 kafka | [2024-04-26 08:22:03,706] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator 
t=2024-04-26T08:21:36.234720272Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.822352ms 08:24:02 policy-pap | ssl.truststore.type = JKS 08:24:02 kafka | [2024-04-26 08:22:03,706] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.238224473Z level=info msg="Executing migration" id="create provenance_type table" 08:24:02 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 08:24:02 kafka | [2024-04-26 08:22:03,722] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.23916699Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=941.966µs 08:24:02 policy-pap | 08:24:02 kafka | [2024-04-26 08:22:03,722] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 
policy-db-migrator | 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.248007921Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 08:24:02 policy-pap | [2024-04-26T08:22:02.946+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 08:24:02 kafka | [2024-04-26 08:22:03,722] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-db-migrator | > upgrade 0120-toscatrigger.sql 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.248984899Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=976.708µs 08:24:02 policy-pap | [2024-04-26T08:22:02.946+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 08:24:02 kafka | [2024-04-26 08:22:03,722] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.255374562Z level=info msg="Executing migration" id="create alert_image table" 08:24:02 policy-pap | [2024-04-26T08:22:02.946+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714119722946 08:24:02 kafka | [2024-04-26 08:22:03,722] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 08:24:02 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.255970911Z level=info msg="Migration successfully executed" id="create alert_image table" duration=596.23µs 08:24:02 policy-pap | [2024-04-26T08:22:02.946+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 08:24:02 kafka | [2024-04-26 08:22:03,722] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.258902224Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 08:24:02 policy-pap | [2024-04-26T08:22:02.947+00:00|INFO|ServiceManager|main] Policy PAP starting topics 08:24:02 kafka | [2024-04-26 08:22:03,722] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.259584117Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=679.453µs 08:24:02 kafka | [2024-04-26 08:22:03,723] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 08:24:02 policy-db-migrator | 08:24:02 policy-pap | [2024-04-26T08:22:02.947+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=eb01c65d-170a-46d6-9ba7-54033f13f8dc, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.262545342Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 08:24:02 kafka | [2024-04-26 08:22:03,723] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 08:24:02 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 08:24:02 policy-pap | [2024-04-26T08:22:02.947+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=db954cd2-8764-4a44-90af-3bb7f2069f83, 
consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.262593794Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=49.042µs 08:24:02 kafka | [2024-04-26 08:22:03,723] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-pap | [2024-04-26T08:22:02.947+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=b203f36e-6d55-43c2-9716-adbeab74f0e0, alive=false, publisher=null]]: starting 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.268155246Z level=info msg="Executing migration" id=create_alert_configuration_history_table 08:24:02 kafka | [2024-04-26 08:22:03,723] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) 08:24:02 policy-db-migrator | ALTER 
TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB 08:24:02 policy-pap | [2024-04-26T08:22:02.961+00:00|INFO|ProducerConfig|main] ProducerConfig values: 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.269206318Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.050762ms 08:24:02 kafka | [2024-04-26 08:22:03,711] INFO [Broker id=1] Finished LeaderAndIsr request in 192ms correlationId 1 from controller 1 for 1 partitions (state.change.logger) 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-pap | acks = -1 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.273625603Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 08:24:02 kafka | [2024-04-26 08:22:03,723] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) 08:24:02 policy-db-migrator | 08:24:02 policy-pap | auto.include.jmx.reporter = true 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.275403541Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.776978ms 08:24:02 kafka | [2024-04-26 08:22:03,723] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) 08:24:02 policy-db-migrator | 08:24:02 policy-pap | batch.size = 16384 08:24:02 grafana | logger=migrator 
t=2024-04-26T08:21:36.279965793Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 08:24:02 policy-db-migrator | > upgrade 0140-toscaparameter.sql 08:24:02 kafka | [2024-04-26 08:22:03,723] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 08:24:02 policy-pap | bootstrap.servers = [kafka:9092] 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.280745501Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 08:24:02 policy-db-migrator | -------------- 08:24:02 kafka | [2024-04-26 08:22:03,723] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 08:24:02 policy-pap | buffer.memory = 33554432 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.286731384Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 08:24:02 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter 08:24:02 kafka | [2024-04-26 08:22:03,724] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 08:24:02 policy-pap | client.dns.lookup = use_all_dns_ips 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.287198276Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=465.322µs 08:24:02 policy-db-migrator | -------------- 08:24:02 kafka | [2024-04-26 08:22:03,724] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) 08:24:02 policy-pap | client.id = producer-1 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.292687635Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 08:24:02 policy-db-migrator | 08:24:02 kafka | [2024-04-26 08:22:03,724] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) 08:24:02 policy-pap | compression.type = none 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.29381644Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.129105ms 08:24:02 policy-db-migrator | 08:24:02 kafka | [2024-04-26 08:22:03,724] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 08:24:02 policy-pap | connections.max.idle.ms = 540000 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.298145512Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 08:24:02 policy-db-migrator | > upgrade 0150-toscaproperty.sql 08:24:02 kafka | [2024-04-26 08:22:03,727] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 08:24:02 policy-pap | delivery.timeout.ms = 120000 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.304936754Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.790852ms 08:24:02 policy-db-migrator | -------------- 08:24:02 kafka | [2024-04-26 08:22:03,727] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=JNNo8CVWSdWgRv4ouhjw3w, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 08:24:02 policy-pap | enable.idempotence = true 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.31058311Z level=info msg="Executing migration" id="create library_element table v1" 08:24:02 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints 08:24:02 kafka | [2024-04-26 08:22:03,727] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 08:24:02 policy-pap | interceptor.classes = [] 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.311391069Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=808.869µs 08:24:02 policy-db-migrator | -------------- 08:24:02 kafka | [2024-04-26 08:22:03,727] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) 08:24:02 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.315861387Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 08:24:02 policy-db-migrator | 08:24:02 kafka | [2024-04-26 08:22:03,728] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 08:24:02 policy-pap | linger.ms = 0 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.316655306Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=794.009µs 08:24:02 policy-db-migrator | -------------- 
08:24:02 kafka | [2024-04-26 08:22:03,728] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) 08:24:02 policy-pap | max.block.ms = 60000 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.320528946Z level=info msg="Executing migration" id="create library_element_connection table v1" 08:24:02 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata 08:24:02 kafka | [2024-04-26 08:22:03,728] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 08:24:02 policy-pap | max.in.flight.requests.per.connection = 5 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.321462501Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=932.745µs 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.33492199Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 08:24:02 kafka | [2024-04-26 08:22:03,728] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 08:24:02 policy-pap | 
max.request.size = 1048576 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.337057224Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=2.137245ms 08:24:02 kafka | [2024-04-26 08:22:03,729] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 08:24:02 policy-pap | metadata.max.age.ms = 300000 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.342861977Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 08:24:02 kafka | [2024-04-26 08:22:03,729] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) 08:24:02 policy-pap | metadata.max.idle.ms = 300000 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.344115689Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.257022ms 08:24:02 kafka | [2024-04-26 08:22:03,729] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition 
__consumer_offsets-4 (state.change.logger) 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-pap | metric.reporters = [] 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.347070453Z level=info msg="Executing migration" id="increase max description length to 2048" 08:24:02 kafka | [2024-04-26 08:22:03,729] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) 08:24:02 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty 08:24:02 policy-pap | metrics.num.samples = 2 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.347124735Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=54.982µs 08:24:02 kafka | [2024-04-26 08:22:03,729] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) 08:24:02 policy-db-migrator | -------------- 08:24:02 policy-pap | metrics.recording.level = INFO 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.34947468Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 08:24:02 policy-db-migrator | 08:24:02 kafka | [2024-04-26 08:22:03,729] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) 08:24:02 policy-pap | metrics.sample.window.ms = 30000 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.34966448Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=189.41µs 08:24:02 policy-db-migrator | 08:24:02 kafka | [2024-04-26 08:22:03,729] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) 08:24:02 policy-pap | partitioner.adaptive.partitioning.enable = true 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.352870416Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 08:24:02 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 08:24:02 kafka | [2024-04-26 08:22:03,729] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 08:24:02 policy-pap | partitioner.availability.timeout.ms = 0 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.353294387Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=423.781µs 08:24:02 policy-db-migrator | -------------- 08:24:02 kafka | [2024-04-26 08:22:03,729] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 08:24:02 policy-pap | partitioner.class = null 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.356305294Z level=info msg="Executing migration" id="create data_keys table" 08:24:02 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY 08:24:02 kafka | [2024-04-26 08:22:03,729] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 08:24:02 policy-pap | partitioner.ignore.keys = false 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.357127214Z level=info msg="Migration successfully executed" id="create data_keys table" duration=821.86µs 08:24:02 policy-db-migrator | -------------- 08:24:02 kafka | [2024-04-26 08:22:03,729] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 08:24:02 policy-pap | receive.buffer.bytes = 32768 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.360052627Z level=info msg="Executing migration" id="create secrets table" 08:24:02 policy-db-migrator | 08:24:02 kafka | [2024-04-26 08:22:03,729] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr 
request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) 08:24:02 policy-pap | reconnect.backoff.max.ms = 1000 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.36072968Z level=info msg="Migration successfully executed" id="create secrets table" duration=679.243µs 08:24:02 policy-db-migrator | -------------- 08:24:02 kafka | [2024-04-26 08:22:03,730] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 08:24:02 policy-pap | reconnect.backoff.ms = 50 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.3674722Z level=info msg="Executing migration" id="rename data_keys name column to id" 08:24:02 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) 08:24:02 kafka | [2024-04-26 08:22:03,730] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 08:24:02 policy-pap | request.timeout.ms = 30000 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.400941616Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=33.469056ms 08:24:02 policy-db-migrator | -------------- 
08:24:02 kafka | [2024-04-26 08:22:03,730] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger)
08:24:02 policy-pap | retries = 2147483647
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.406121178Z level=info msg="Executing migration" id="add name column into data_keys"
08:24:02 policy-db-migrator | 
08:24:02 policy-pap | retry.backoff.ms = 100
08:24:02 policy-db-migrator | 
08:24:02 kafka | [2024-04-26 08:22:03,730] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.412794295Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=6.671617ms
08:24:02 policy-pap | sasl.client.callback.handler.class = null
08:24:02 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql
08:24:02 kafka | [2024-04-26 08:22:03,730] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.418700263Z level=info msg="Executing migration" id="copy data_keys id column values into name"
08:24:02 policy-pap | sasl.jaas.config = null
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | [2024-04-26 08:22:03,730] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.418917724Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=217.34µs
08:24:02 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
08:24:02 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY
08:24:02 kafka | [2024-04-26 08:22:03,730] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.422054117Z level=info msg="Executing migration" id="rename data_keys name column to label"
08:24:02 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | [2024-04-26 08:22:03,730] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger)
08:24:02 policy-pap | sasl.kerberos.service.name = null
08:24:02 policy-db-migrator | 
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.453440351Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=31.385844ms
08:24:02 kafka | [2024-04-26 08:22:03,731] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger)
08:24:02 policy-db-migrator | --------------
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.459211063Z level=info msg="Executing migration" id="rename data_keys id column back to name"
08:24:02 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
08:24:02 kafka | [2024-04-26 08:22:03,731] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger)
08:24:02 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.495663495Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=36.454011ms
08:24:02 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
08:24:02 kafka | [2024-04-26 08:22:03,731] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger)
08:24:02 policy-db-migrator | --------------
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.528048787Z level=info msg="Executing migration" id="create kv_store table v1"
08:24:02 policy-pap | sasl.login.callback.handler.class = null
08:24:02 kafka | [2024-04-26 08:22:03,731] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger)
08:24:02 policy-db-migrator | 
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.528867838Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=819.691µs
08:24:02 policy-pap | sasl.login.class = null
08:24:02 kafka | [2024-04-26 08:22:03,731] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger)
08:24:02 policy-db-migrator | 
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.53362913Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
08:24:02 policy-pap | sasl.login.connect.timeout.ms = null
08:24:02 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql
08:24:02 kafka | [2024-04-26 08:22:03,731] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.534898052Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.274913ms
08:24:02 policy-pap | sasl.login.read.timeout.ms = null
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | [2024-04-26 08:22:03,731] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.542782937Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
08:24:02 policy-pap | sasl.login.refresh.buffer.seconds = 300
08:24:02 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT
08:24:02 kafka | [2024-04-26 08:22:03,731] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.543009669Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=227.222µs
08:24:02 policy-pap | sasl.login.refresh.min.period.seconds = 60
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | [2024-04-26 08:22:03,731] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 50 become-leader and 0 become-follower partitions (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.550619861Z level=info msg="Executing migration" id="create permission table"
08:24:02 policy-pap | sasl.login.refresh.window.factor = 0.8
08:24:02 policy-db-migrator | 
08:24:02 kafka | [2024-04-26 08:22:03,732] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 50 partitions (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.551531245Z level=info msg="Migration successfully executed" id="create permission table" duration=911.774µs
08:24:02 policy-pap | sasl.login.refresh.window.jitter = 0.05
08:24:02 policy-db-migrator | 
08:24:02 kafka | [2024-04-26 08:22:03,737] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.555637276Z level=info msg="Executing migration" id="add unique index permission.role_id"
08:24:02 policy-pap | sasl.login.retry.backoff.max.ms = 10000
08:24:02 policy-db-migrator | > upgrade 0100-upgrade.sql
08:24:02 kafka | [2024-04-26 08:22:03,738] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.556611753Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=974.807µs
08:24:02 policy-pap | sasl.login.retry.backoff.ms = 100
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | [2024-04-26 08:22:03,738] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.561261721Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
08:24:02 policy-pap | sasl.mechanism = GSSAPI
08:24:02 policy-db-migrator | select 'upgrade to 1100 completed' as msg
08:24:02 kafka | [2024-04-26 08:22:03,738] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.562028678Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=766.777µs
08:24:02 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | [2024-04-26 08:22:03,738] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.564972672Z level=info msg="Executing migration" id="create role table"
08:24:02 policy-pap | sasl.oauthbearer.expected.audience = null
08:24:02 policy-db-migrator | 
08:24:02 kafka | [2024-04-26 08:22:03,738] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.565597762Z level=info msg="Migration successfully executed" id="create role table" duration=625.12µs
08:24:02 policy-pap | sasl.oauthbearer.expected.issuer = null
08:24:02 policy-db-migrator | msg
08:24:02 kafka | [2024-04-26 08:22:03,738] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.570546504Z level=info msg="Executing migration" id="add column display_name"
08:24:02 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
08:24:02 policy-db-migrator | upgrade to 1100 completed
08:24:02 kafka | [2024-04-26 08:22:03,738] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.576014942Z level=info msg="Migration successfully executed" id="add column display_name" duration=5.469038ms
08:24:02 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
08:24:02 policy-db-migrator | 
08:24:02 kafka | [2024-04-26 08:22:03,738] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.579893361Z level=info msg="Executing migration" id="add column group_name"
08:24:02 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
08:24:02 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql
08:24:02 kafka | [2024-04-26 08:22:03,738] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.58498262Z level=info msg="Migration successfully executed" id="add column group_name" duration=5.087179ms
08:24:02 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | [2024-04-26 08:22:03,738] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.588293941Z level=info msg="Executing migration" id="add index role.org_id"
08:24:02 policy-pap | sasl.oauthbearer.scope.claim.name = scope
08:24:02 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME
08:24:02 kafka | [2024-04-26 08:22:03,738] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.58927926Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=985.069µs
08:24:02 policy-pap | sasl.oauthbearer.sub.claim.name = sub
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | [2024-04-26 08:22:03,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.593672544Z level=info msg="Executing migration" id="add unique index role_org_id_name"
08:24:02 policy-pap | sasl.oauthbearer.token.endpoint.url = null
08:24:02 policy-db-migrator | 
08:24:02 kafka | [2024-04-26 08:22:03,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.594758788Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.089524ms
08:24:02 policy-pap | security.protocol = PLAINTEXT
08:24:02 policy-db-migrator | 
08:24:02 kafka | [2024-04-26 08:22:03,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.598808405Z level=info msg="Executing migration" id="add index role_org_id_uid"
08:24:02 policy-pap | security.providers = null
08:24:02 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql
08:24:02 kafka | [2024-04-26 08:22:03,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.599836346Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.028101ms
08:24:02 policy-pap | send.buffer.bytes = 131072
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | [2024-04-26 08:22:03,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.608634085Z level=info msg="Executing migration" id="create team role table"
08:24:02 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.609971201Z level=info msg="Migration successfully executed" id="create team role table" duration=1.336956ms
08:24:02 policy-pap | socket.connection.setup.timeout.max.ms = 30000
08:24:02 kafka | [2024-04-26 08:22:03,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 policy-db-migrator | --------------
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.614769435Z level=info msg="Executing migration" id="add index team_role.org_id"
08:24:02 policy-pap | socket.connection.setup.timeout.ms = 10000
08:24:02 kafka | [2024-04-26 08:22:03,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 policy-db-migrator | 
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.616435737Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.666062ms
08:24:02 policy-pap | ssl.cipher.suites = null
08:24:02 kafka | [2024-04-26 08:22:03,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 policy-db-migrator | --------------
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.621233041Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
08:24:02 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
08:24:02 kafka | [2024-04-26 08:22:03,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.623323553Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=2.089842ms
08:24:02 policy-pap | ssl.endpoint.identification.algorithm = https
08:24:02 kafka | [2024-04-26 08:22:03,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 policy-db-migrator | --------------
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.626973882Z level=info msg="Executing migration" id="add index team_role.team_id"
08:24:02 policy-pap | ssl.engine.factory.class = null
08:24:02 kafka | [2024-04-26 08:22:03,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 policy-db-migrator | 
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.628886105Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.912213ms
08:24:02 policy-pap | ssl.key.password = null
08:24:02 kafka | [2024-04-26 08:22:03,739] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 policy-db-migrator | 
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.632195877Z level=info msg="Executing migration" id="create user role table"
08:24:02 policy-pap | ssl.keymanager.algorithm = SunX509
08:24:02 kafka | [2024-04-26 08:22:03,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 policy-db-migrator | > upgrade 0120-audit_sequence.sql
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.633020737Z level=info msg="Migration successfully executed" id="create user role table" duration=824.8µs
08:24:02 policy-pap | ssl.keystore.certificate.chain = null
08:24:02 kafka | [2024-04-26 08:22:03,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 policy-db-migrator | --------------
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.637071075Z level=info msg="Executing migration" id="add index user_role.org_id"
08:24:02 policy-pap | ssl.keystore.key = null
08:24:02 kafka | [2024-04-26 08:22:03,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.638069695Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=998.26µs
08:24:02 policy-pap | ssl.keystore.location = null
08:24:02 kafka | [2024-04-26 08:22:03,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 policy-db-migrator | --------------
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.641359065Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
08:24:02 policy-pap | ssl.keystore.password = null
08:24:02 kafka | [2024-04-26 08:22:03,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 policy-db-migrator | 
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.642453019Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.093184ms
08:24:02 policy-pap | ssl.keystore.type = JKS
08:24:02 kafka | [2024-04-26 08:22:03,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 policy-db-migrator | --------------
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.649300404Z level=info msg="Executing migration" id="add index user_role.user_id"
08:24:02 policy-pap | ssl.protocol = TLSv1.3
08:24:02 kafka | [2024-04-26 08:22:03,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit))
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.651011417Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.706973ms
08:24:02 policy-pap | ssl.provider = null
08:24:02 kafka | [2024-04-26 08:22:03,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 policy-db-migrator | --------------
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.656105425Z level=info msg="Executing migration" id="create builtin role table"
08:24:02 policy-pap | ssl.secure.random.implementation = null
08:24:02 kafka | [2024-04-26 08:22:03,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 policy-db-migrator | 
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.657390498Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.284953ms
08:24:02 policy-pap | ssl.trustmanager.algorithm = PKIX
08:24:02 kafka | [2024-04-26 08:22:03,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 policy-db-migrator | 
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.66233028Z level=info msg="Executing migration" id="add index builtin_role.role_id"
08:24:02 policy-pap | ssl.truststore.certificates = null
08:24:02 kafka | [2024-04-26 08:22:03,740] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.664011303Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.681432ms
08:24:02 policy-db-migrator | > upgrade 0130-statistics_sequence.sql
08:24:02 kafka | [2024-04-26 08:22:03,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.66805901Z level=info msg="Executing migration" id="add index builtin_role.name"
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | ssl.truststore.location = null
08:24:02 kafka | [2024-04-26 08:22:03,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.66907179Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.01406ms
08:24:02 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME))
08:24:02 policy-pap | ssl.truststore.password = null
08:24:02 kafka | [2024-04-26 08:22:03,740] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.673564179Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | ssl.truststore.type = JKS
08:24:02 kafka | [2024-04-26 08:22:03,741] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.684919203Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=11.356665ms
08:24:02 policy-db-migrator | 
08:24:02 policy-pap | transaction.timeout.ms = 60000
08:24:02 kafka | [2024-04-26 08:22:03,741] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.688587413Z level=info msg="Executing migration" id="add index builtin_role.org_id"
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | transactional.id = null
08:24:02 kafka | [2024-04-26 08:22:03,741] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.689312439Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=724.646µs
08:24:02 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics))
08:24:02 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
08:24:02 kafka | [2024-04-26 08:22:03,741] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.692497924Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | 
08:24:02 kafka | [2024-04-26 08:22:03,741] INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.693534375Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.035971ms
08:24:02 policy-db-migrator | 
08:24:02 kafka | [2024-04-26 08:22:03,741] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | [2024-04-26T08:22:02.971+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.697479527Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
08:24:02 kafka | [2024-04-26 08:22:03,741] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 policy-db-migrator | TRUNCATE TABLE sequence
08:24:02 policy-pap | [2024-04-26T08:22:02.985+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
08:24:02 kafka | [2024-04-26 08:22:03,741] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.698498228Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.018861ms
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | [2024-04-26T08:22:02.985+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
08:24:02 kafka | [2024-04-26 08:22:03,741] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.702767636Z level=info msg="Executing migration" id="add unique index role.uid"
08:24:02 policy-db-migrator | 
08:24:02 policy-pap | [2024-04-26T08:22:02.985+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714119722984
08:24:02 kafka | [2024-04-26 08:22:03,741] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.704009847Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.242081ms
08:24:02 policy-db-migrator | 
08:24:02 policy-pap | [2024-04-26T08:22:02.985+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=b203f36e-6d55-43c2-9716-adbeab74f0e0, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
08:24:02 kafka | [2024-04-26 08:22:03,742] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.707318398Z level=info msg="Executing migration" id="create seed assignment table"
08:24:02 policy-db-migrator | > upgrade 0100-pdpstatistics.sql
08:24:02 policy-pap | [2024-04-26T08:22:02.985+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=9596763a-349e-4441-886a-d80f8a74994a, alive=false, publisher=null]]: starting
08:24:02 kafka | [2024-04-26 08:22:03,742] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.708089137Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=770.369µs
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | [2024-04-26T08:22:02.986+00:00|INFO|ProducerConfig|main] ProducerConfig values:
08:24:02 kafka | [2024-04-26 08:22:03,742] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.713372604Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
08:24:02 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics
08:24:02 policy-pap | acks = -1
08:24:02 kafka | [2024-04-26 08:22:03,742] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.714456618Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.083614ms
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | auto.include.jmx.reporter = true
08:24:02 kafka | [2024-04-26 08:22:03,742] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.717553469Z level=info msg="Executing migration" id="add column hidden to role table"
08:24:02 policy-db-migrator | 
08:24:02 policy-pap | batch.size = 16384
08:24:02 kafka | [2024-04-26 08:22:03,742] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.728854961Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=11.241259ms
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | bootstrap.servers = [kafka:9092]
08:24:02 kafka | [2024-04-26 08:22:03,748] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 for 50 partitions (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.731533962Z level=info msg="Executing migration" id="permission kind migration"
08:24:02 policy-db-migrator | DROP TABLE pdpstatistics
08:24:02 policy-pap | buffer.memory = 33554432
08:24:02 kafka | [2024-04-26 08:22:03,750] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.737814029Z level=info msg="Migration successfully executed" id="permission kind migration" duration=6.279057ms
08:24:02 policy-db-migrator | --------------
08:24:02 policy-pap | client.dns.lookup = use_all_dns_ips
08:24:02 kafka | [2024-04-26 08:22:03,750] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.742863285Z level=info msg="Executing migration" id="permission attribute migration"
08:24:02 policy-pap | client.id = producer-2
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.750909349Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=8.045534ms
08:24:02 policy-db-migrator | 
08:24:02 kafka | [2024-04-26 08:22:03,757] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
08:24:02 policy-pap | compression.type = none
08:24:02 policy-db-migrator | 
08:24:02 kafka | [2024-04-26 08:22:03,757] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.757149054Z level=info msg="Executing migration" id="permission identifier migration"
08:24:02 policy-pap | connections.max.idle.ms = 540000
08:24:02 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
08:24:02 kafka | [2024-04-26 08:22:03,757] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.765243519Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.093695ms
08:24:02 policy-pap | delivery.timeout.ms = 120000
08:24:02 policy-db-migrator | --------------
08:24:02 kafka | [2024-04-26 08:22:03,757] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.768436796Z level=info msg="Executing migration" id="add permission identifier index"
08:24:02 policy-pap | enable.idempotence = true
08:24:02 kafka | [2024-04-26 08:22:03,758] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.769469776Z level=info msg="Migration successfully executed" id="add permission identifier index"
duration=1.03255ms 08:24:02 policy-pap | interceptor.classes = [] 08:24:02 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats 08:24:02 kafka | [2024-04-26 08:22:03,758] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.774521102Z level=info msg="Executing migration" id="add permission action scope role_id index" 08:24:02 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 08:24:02 policy-db-migrator | -------------- 08:24:02 kafka | [2024-04-26 08:22:03,758] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.77568737Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.165728ms 08:24:02 policy-pap | linger.ms = 0 08:24:02 policy-db-migrator | 08:24:02 kafka | [2024-04-26 08:22:03,758] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.779879455Z level=info msg="Executing migration" id="remove permission 
role_id action scope index" 08:24:02 policy-pap | max.block.ms = 60000 08:24:02 policy-db-migrator | 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.7814274Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.553996ms 08:24:02 kafka | [2024-04-26 08:22:03,758] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 policy-pap | max.in.flight.requests.per.connection = 5 08:24:02 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.786386452Z level=info msg="Executing migration" id="create query_history table v1" 08:24:02 kafka | [2024-04-26 08:22:03,758] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 policy-pap | max.request.size = 1048576 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.787840753Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.454091ms 08:24:02 kafka | [2024-04-26 08:22:03,758] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 
(state.change.logger) 08:24:02 policy-pap | metadata.max.age.ms = 300000 08:24:02 policy-db-migrator | DROP TABLE statistics_sequence 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.792492651Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 08:24:02 kafka | [2024-04-26 08:22:03,758] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 policy-pap | metadata.max.idle.ms = 300000 08:24:02 policy-db-migrator | -------------- 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.794755601Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=2.26525ms 08:24:02 kafka | [2024-04-26 08:22:03,758] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 policy-db-migrator | 08:24:02 kafka | [2024-04-26 08:22:03,759] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 policy-pap | metric.reporters = [] 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.800937194Z level=info msg="Executing migration" id="alter table query_history alter 
column created_by type to bigint" 08:24:02 kafka | [2024-04-26 08:22:03,759] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 policy-pap | metrics.num.samples = 2 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.801093781Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=156.727µs 08:24:02 policy-db-migrator | policyadmin: OK: upgrade (1300) 08:24:02 kafka | [2024-04-26 08:22:03,759] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 policy-pap | metrics.recording.level = INFO 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.804432025Z level=info msg="Executing migration" id="rbac disabled migrator" 08:24:02 policy-db-migrator | name version 08:24:02 kafka | [2024-04-26 08:22:03,759] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 policy-pap | metrics.sample.window.ms = 30000 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.804474046Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=42.842µs 08:24:02 
policy-db-migrator | policyadmin 1300 08:24:02 kafka | [2024-04-26 08:22:03,759] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 policy-pap | partitioner.adaptive.partitioning.enable = true 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.80802121Z level=info msg="Executing migration" id="teams permissions migration" 08:24:02 policy-db-migrator | ID script operation from_version to_version tag success atTime 08:24:02 kafka | [2024-04-26 08:22:03,759] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 policy-pap | partitioner.availability.timeout.ms = 0 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.808658531Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=644.942µs 08:24:02 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:33 08:24:02 kafka | [2024-04-26 08:22:03,759] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 policy-pap | partitioner.class = null 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.811890709Z 
level=info msg="Executing migration" id="dashboard permissions" 08:24:02 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:33 08:24:02 kafka | [2024-04-26 08:22:03,759] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 policy-pap | partitioner.ignore.keys = false 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.812885648Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=996.348µs 08:24:02 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:33 08:24:02 kafka | [2024-04-26 08:22:03,760] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 policy-pap | receive.buffer.bytes = 32768 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.816138717Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 08:24:02 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:33 08:24:02 kafka | [2024-04-26 08:22:03,760] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 
epoch 1 (state.change.logger) 08:24:02 policy-pap | reconnect.backoff.max.ms = 1000 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.817280342Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=1.141885ms 08:24:02 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:33 08:24:02 kafka | [2024-04-26 08:22:03,760] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 policy-pap | reconnect.backoff.ms = 50 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.821707349Z level=info msg="Executing migration" id="drop managed folder create actions" 08:24:02 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:33 08:24:02 kafka | [2024-04-26 08:22:03,760] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.822034675Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=327.726µs 08:24:02 policy-pap | request.timeout.ms = 30000 08:24:02 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:33 08:24:02 kafka | [2024-04-26 08:22:03,760] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.82480671Z level=info msg="Executing migration" id="alerting notification permissions" 08:24:02 policy-pap | retries = 2147483647 08:24:02 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:33 08:24:02 kafka | [2024-04-26 08:22:03,760] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.82542297Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=616.14µs 08:24:02 policy-pap | retry.backoff.ms = 100 08:24:02 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:33 08:24:02 kafka | [2024-04-26 08:22:03,760] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.828137563Z level=info msg="Executing migration" id="create query_history_star table v1" 08:24:02 policy-pap | sasl.client.callback.handler.class = null 08:24:02 
policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:33 08:24:02 kafka | [2024-04-26 08:22:03,760] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.829005125Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=866.892µs 08:24:02 policy-pap | sasl.jaas.config = null 08:24:02 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:33 08:24:02 kafka | [2024-04-26 08:22:03,760] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 08:24:02 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:33 08:24:02 kafka | [2024-04-26 08:22:03,761] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:33 08:24:02 grafana | 
logger=migrator t=2024-04-26T08:21:36.834239552Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 08:24:02 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 08:24:02 kafka | [2024-04-26 08:22:03,761] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.83562929Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.388607ms 08:24:02 kafka | [2024-04-26 08:22:03,761] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 policy-pap | sasl.kerberos.service.name = null 08:24:02 kafka | [2024-04-26 08:22:03,761] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34 08:24:02 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 08:24:02 grafana | logger=migrator 
t=2024-04-26T08:21:36.84281174Z level=info msg="Executing migration" id="add column org_id in query_history_star" 08:24:02 kafka | [2024-04-26 08:22:03,761] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34 08:24:02 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.853548775Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=10.737975ms 08:24:02 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34 08:24:02 kafka | [2024-04-26 08:22:03,761] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 policy-pap | sasl.login.callback.handler.class = null 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.858102558Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 08:24:02 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34 08:24:02 kafka | [2024-04-26 08:22:03,761] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.858177591Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=74.603µs 08:24:02 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34 08:24:02 policy-pap | sasl.login.class = null 08:24:02 kafka | [2024-04-26 08:22:03,761] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.861754436Z level=info msg="Executing migration" id="create correlation table v1" 08:24:02 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34 08:24:02 policy-pap | sasl.login.connect.timeout.ms = null 08:24:02 kafka | [2024-04-26 08:22:03,762] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.862895162Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.137196ms 08:24:02 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34 08:24:02 policy-pap | sasl.login.read.timeout.ms = 
null 08:24:02 kafka | [2024-04-26 08:22:03,762] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.867524378Z level=info msg="Executing migration" id="add index correlations.uid" 08:24:02 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34 08:24:02 policy-pap | sasl.login.refresh.buffer.seconds = 300 08:24:02 kafka | [2024-04-26 08:22:03,762] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.86879918Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.273962ms 08:24:02 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34 08:24:02 policy-pap | sasl.login.refresh.min.period.seconds = 60 08:24:02 kafka | [2024-04-26 08:22:03,762] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.874205665Z level=info msg="Executing migration" id="add index 
correlations.source_uid" 08:24:02 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34 08:24:02 policy-pap | sasl.login.refresh.window.factor = 0.8 08:24:02 kafka | [2024-04-26 08:22:03,762] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.876253435Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=2.04757ms 08:24:02 policy-pap | sasl.login.refresh.window.jitter = 0.05 08:24:02 kafka | [2024-04-26 08:22:03,762] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.879787088Z level=info msg="Executing migration" id="add correlation config column" 08:24:02 policy-pap | sasl.login.retry.backoff.max.ms = 10000 08:24:02 kafka | [2024-04-26 08:22:03,762] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.888300633Z level=info msg="Migration successfully executed" id="add correlation config column" duration=8.513146ms 08:24:02 policy-pap | sasl.login.retry.backoff.ms = 100 08:24:02 kafka | [2024-04-26 08:22:03,762] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.893643965Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 08:24:02 policy-pap | sasl.mechanism = GSSAPI 08:24:02 kafka | [2024-04-26 08:22:03,762] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.894762009Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.118224ms 08:24:02 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 08:24:02 kafka | [2024-04-26 08:22:03,763] TRACE [Broker id=1] Received 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 3 from controller 1 epoch 1 (state.change.logger) 08:24:02 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.898949464Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 08:24:02 policy-pap | sasl.oauthbearer.expected.audience = null 08:24:02 kafka | [2024-04-26 08:22:03,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 08:24:02 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.899964344Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.01537ms 08:24:02 policy-pap | sasl.oauthbearer.expected.issuer = null 08:24:02 kafka | [2024-04-26 08:22:03,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 08:24:02 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.903027793Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 08:24:02 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 08:24:02 kafka | [2024-04-26 08:22:03,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 
epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 08:24:02 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.927782593Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=24.75462ms 08:24:02 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 08:24:02 kafka | [2024-04-26 08:22:03,785] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 08:24:02 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:34 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.93221278Z level=info msg="Executing migration" id="create correlation v2" 08:24:02 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 08:24:02 kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 08:24:02 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.933099353Z level=info msg="Migration successfully executed" id="create correlation v2" duration=886.033µs 08:24:02 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 08:24:02 kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 08:24:02 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.938809002Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 08:24:02 policy-pap | sasl.oauthbearer.scope.claim.name = scope 08:24:02 kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 08:24:02 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.939896975Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.086283ms 08:24:02 policy-pap | sasl.oauthbearer.sub.claim.name = sub 08:24:02 kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 08:24:02 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.946840555Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 08:24:02 policy-pap | sasl.oauthbearer.token.endpoint.url = null 08:24:02 kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 08:24:02 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.948772558Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.932143ms 08:24:02 policy-pap | security.protocol = PLAINTEXT 08:24:02 kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling 
LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 08:24:02 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.954774512Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 08:24:02 policy-pap | security.providers = null 08:24:02 kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.956202972Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.43213ms 08:24:02 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 08:24:02 policy-pap | send.buffer.bytes = 131072 08:24:02 kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.960287462Z level=info msg="Executing migration" id="copy correlation v1 to v2" 08:24:02 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 08:24:02 policy-pap | socket.connection.setup.timeout.max.ms = 30000 08:24:02 kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.960636508Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=348.716µs 
08:24:02 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 08:24:02 policy-pap | socket.connection.setup.timeout.ms = 10000 08:24:02 kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.965409622Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 08:24:02 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 08:24:02 policy-pap | ssl.cipher.suites = null 08:24:02 kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.966330706Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=920.544µs 08:24:02 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 08:24:02 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 08:24:02 kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.970341323Z level=info msg="Executing migration" id="add provisioning column" 08:24:02 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 08:24:02 policy-pap | ssl.endpoint.identification.algorithm = https 08:24:02 kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 
from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.979364914Z level=info msg="Migration successfully executed" id="add provisioning column" duration=9.022231ms 08:24:02 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 08:24:02 policy-pap | ssl.engine.factory.class = null 08:24:02 kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.982439164Z level=info msg="Executing migration" id="create entity_events table" 08:24:02 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 08:24:02 policy-pap | ssl.key.password = null 08:24:02 kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.983611231Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.171797ms 08:24:02 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 08:24:02 policy-pap | ssl.keymanager.algorithm = SunX509 08:24:02 kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.992085105Z level=info msg="Executing migration" id="create dashboard public config v1" 08:24:02 policy-db-migrator | 50 
0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 08:24:02 policy-pap | ssl.keystore.certificate.chain = null 08:24:02 kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.994164477Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=2.078852ms 08:24:02 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 08:24:02 policy-pap | ssl.keystore.key = null 08:24:02 kafka | [2024-04-26 08:22:03,786] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.998501699Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 08:24:02 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 08:24:02 policy-pap | ssl.keystore.location = null 08:24:02 kafka | [2024-04-26 08:22:03,787] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:36.999065076Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 08:24:02 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 08:24:02 policy-pap | ssl.keystore.password = null 08:24:02 kafka | [2024-04-26 08:22:03,787] TRACE [Broker id=1] Handling LeaderAndIsr request 
correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.002346457Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 08:24:02 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 08:24:02 policy-pap | ssl.keystore.type = JKS 08:24:02 kafka | [2024-04-26 08:22:03,787] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.002874123Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 08:24:02 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 08:24:02 policy-pap | ssl.protocol = TLSv1.3 08:24:02 kafka | [2024-04-26 08:22:03,787] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.006134222Z level=info msg="Executing migration" id="Drop old dashboard public config table" 08:24:02 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 08:24:02 policy-pap | ssl.provider = null 08:24:02 kafka | [2024-04-26 08:22:03,787] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.007061308Z level=info msg="Migration successfully executed" id="Drop old 
dashboard public config table" duration=926.236µs 08:24:02 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:35 08:24:02 policy-pap | ssl.secure.random.implementation = null 08:24:02 kafka | [2024-04-26 08:22:03,787] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.01118492Z level=info msg="Executing migration" id="recreate dashboard public config v1" 08:24:02 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 08:24:02 policy-pap | ssl.trustmanager.algorithm = PKIX 08:24:02 kafka | [2024-04-26 08:22:03,787] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.012426661Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.23838ms 08:24:02 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 08:24:02 policy-pap | ssl.truststore.certificates = null 08:24:02 kafka | [2024-04-26 08:22:03,787] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.016297221Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 08:24:02 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 08:24:02 policy-pap | ssl.truststore.location = null 08:24:02 kafka | [2024-04-26 08:22:03,788] TRACE [Broker id=1] Handling LeaderAndIsr 
request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.017496049Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.199688ms 08:24:02 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 08:24:02 policy-pap | ssl.truststore.password = null 08:24:02 kafka | [2024-04-26 08:22:03,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.020878755Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 08:24:02 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 08:24:02 policy-pap | ssl.truststore.type = JKS 08:24:02 kafka | [2024-04-26 08:22:03,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.022087314Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.208449ms 08:24:02 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 08:24:02 policy-pap | transaction.timeout.ms = 60000 08:24:02 kafka | [2024-04-26 08:22:03,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.026795525Z 
level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 08:24:02 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 08:24:02 policy-pap | transactional.id = null 08:24:02 kafka | [2024-04-26 08:22:03,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.027955781Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.160096ms 08:24:02 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 08:24:02 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 08:24:02 kafka | [2024-04-26 08:22:03,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.031859233Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 08:24:02 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 08:24:02 policy-pap | 08:24:02 kafka | [2024-04-26 08:22:03,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.033356736Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.496423ms 08:24:02 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2604240821330800u 1 
2024-04-26 08:21:36 08:24:02 policy-pap | [2024-04-26T08:22:02.986+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 08:24:02 kafka | [2024-04-26 08:22:03,788] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.036620846Z level=info msg="Executing migration" id="Drop public config table" 08:24:02 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 08:24:02 policy-pap | [2024-04-26T08:22:02.989+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 08:24:02 kafka | [2024-04-26 08:22:03,789] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.038249546Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.62767ms 08:24:02 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 08:24:02 policy-pap | [2024-04-26T08:22:02.989+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.042727326Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 08:24:02 kafka | [2024-04-26 08:22:03,789] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 08:24:02 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 08:24:02 policy-pap | [2024-04-26T08:22:02.989+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1714119722989 08:24:02 
grafana | logger=migrator t=2024-04-26T08:21:37.043974056Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.24629ms 08:24:02 kafka | [2024-04-26 08:22:03,789] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 08:24:02 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 08:24:02 policy-pap | [2024-04-26T08:22:02.989+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=9596763a-349e-4441-886a-d80f8a74994a, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.04974291Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 08:24:02 kafka | [2024-04-26 08:22:03,789] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 08:24:02 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 08:24:02 policy-pap | [2024-04-26T08:22:02.989+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.051717635Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.974036ms 08:24:02 kafka | [2024-04-26 08:22:03,789] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 08:24:02 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 08:24:02 policy-pap | 
[2024-04-26T08:22:02.989+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.055632798Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 08:24:02 kafka | [2024-04-26 08:22:03,789] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 08:24:02 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 08:24:02 policy-pap | [2024-04-26T08:22:02.991+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 08:24:02 kafka | [2024-04-26 08:22:03,789] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.056844887Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.211649ms 08:24:02 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 08:24:02 policy-pap | [2024-04-26T08:22:02.991+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 08:24:02 kafka | [2024-04-26 08:22:03,789] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.060031053Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 08:24:02 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 08:24:02 
policy-pap | [2024-04-26T08:22:02.993+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 08:24:02 kafka | [2024-04-26 08:22:03,791] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.06118425Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.153357ms 08:24:02 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 08:24:02 policy-pap | [2024-04-26T08:22:02.994+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 08:24:02 kafka | [2024-04-26 08:22:03,796] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.066816696Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 08:24:02 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 08:24:02 policy-pap | [2024-04-26T08:22:02.994+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 08:24:02 kafka | [2024-04-26 08:22:03,796] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.090759748Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=23.948103ms 08:24:02 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 
2604240821330800u 1 2024-04-26 08:21:36 08:24:02 policy-pap | [2024-04-26T08:22:02.994+00:00|INFO|TimerManager|Thread-9] timer manager update started 08:24:02 kafka | [2024-04-26 08:22:03,797] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 3 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.095135673Z level=info msg="Executing migration" id="add annotations_enabled column" 08:24:02 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:36 08:24:02 policy-pap | [2024-04-26T08:22:02.995+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 08:24:02 kafka | [2024-04-26 08:22:03,801] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-37, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, 
__consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.102605409Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=7.468547ms 08:24:02 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37 08:24:02 policy-pap | [2024-04-26T08:22:02.995+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 08:24:02 kafka | [2024-04-26 08:22:03,802] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 3 from controller 1 epoch 1 as part of the become-leader transition for 50 partitions (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.106602855Z level=info msg="Executing migration" id="add time_selection_enabled column" 08:24:02 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37 08:24:02 policy-pap | [2024-04-26T08:22:02.997+00:00|INFO|ServiceManager|main] Policy PAP started 08:24:02 kafka | [2024-04-26 08:22:03,808] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.115408166Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.79667ms 08:24:02 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37 08:24:02 policy-pap | [2024-04-26T08:22:02.998+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 9.787 seconds (process running for 10.378) 08:24:02 kafka | [2024-04-26 08:22:03,809] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, 
compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.121312436Z level=info msg="Executing migration" id="delete orphaned public dashboards" 08:24:02 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37 08:24:02 kafka | [2024-04-26 08:22:03,809] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.121549397Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=234.952µs 08:24:02 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37 08:24:02 kafka | [2024-04-26 08:22:03,810] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.124578425Z level=info msg="Executing migration" id="add share column" 08:24:02 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37 08:24:02 kafka | [2024-04-26 08:22:03,810] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.136777693Z level=info msg="Migration successfully executed" id="add share column" duration=12.197578ms 08:24:02 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37 08:24:02 policy-pap | [2024-04-26T08:22:03.402+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.140929987Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 08:24:02 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37 08:24:02 policy-pap | [2024-04-26T08:22:03.403+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: qUquThiHQAKlsircSK68zw 08:24:02 policy-pap | [2024-04-26T08:22:03.404+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: qUquThiHQAKlsircSK68zw 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.141151927Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=221.68µs 08:24:02 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37 08:24:02 kafka | [2024-04-26 08:22:03,825] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 08:24:02 policy-pap | [2024-04-26T08:22:03.405+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: qUquThiHQAKlsircSK68zw 08:24:02 grafana | 
logger=migrator t=2024-04-26T08:21:37.144920451Z level=info msg="Executing migration" id="create file table" 08:24:02 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37 08:24:02 kafka | [2024-04-26 08:22:03,826] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 08:24:02 policy-pap | [2024-04-26T08:22:03.478+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.146056887Z level=info msg="Migration successfully executed" id="create file table" duration=1.135766ms 08:24:02 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37 08:24:02 kafka | [2024-04-26 08:22:03,826] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 08:24:02 policy-pap | [2024-04-26T08:22:03.478+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Cluster ID: qUquThiHQAKlsircSK68zw 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.15183225Z level=info msg="Executing migration" id="file table idx: path natural pk" 08:24:02 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37 08:24:02 kafka | [2024-04-26 08:22:03,826] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 
(kafka.cluster.Partition) 08:24:02 policy-pap | [2024-04-26T08:22:03.499+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.153188127Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.355877ms 08:24:02 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37 08:24:02 kafka | [2024-04-26 08:22:03,826] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 08:24:02 policy-pap | [2024-04-26T08:22:03.512+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.157163742Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 08:24:02 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37 08:24:02 kafka | [2024-04-26 08:22:03,838] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 08:24:02 policy-pap | [2024-04-26T08:22:03.534+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.158264896Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash 
fast folder retrieval" duration=1.104725ms 08:24:02 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37 08:24:02 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2604240821330800u 1 2024-04-26 08:21:37 08:24:02 policy-pap | [2024-04-26T08:22:03.605+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.162402808Z level=info msg="Executing migration" id="create file_meta table" 08:24:02 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2604240821330900u 1 2024-04-26 08:21:37 08:24:02 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 2604240821330900u 1 2024-04-26 08:21:37 08:24:02 policy-pap | [2024-04-26T08:22:03.606+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 08:24:02 kafka | [2024-04-26 08:22:03,842] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.163183257Z level=info msg="Migration successfully executed" id="create file_meta table" duration=778.769µs 08:24:02 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2604240821330900u 1 2024-04-26 08:21:38 08:24:02 policy-pap | [2024-04-26T08:22:03.714+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] 
Error while fetching metadata with correlation id 6 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 08:24:02 kafka | [2024-04-26 08:22:03,842] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.166698439Z level=info msg="Executing migration" id="file table idx: path key" 08:24:02 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2604240821330900u 1 2024-04-26 08:21:38 08:24:02 policy-pap | [2024-04-26T08:22:03.728+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 08:24:02 kafka | [2024-04-26 08:22:03,842] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.167513539Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=814.97µs 08:24:02 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2604240821330900u 1 2024-04-26 08:21:38 08:24:02 kafka | [2024-04-26 08:22:03,842] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 08:24:02 policy-pap | [2024-04-26T08:22:04.455+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.17347431Z level=info msg="Executing migration" id="set path collation in file table" 08:24:02 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2604240821330900u 1 2024-04-26 08:21:38 08:24:02 kafka | [2024-04-26 08:22:03,853] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 08:24:02 policy-pap | [2024-04-26T08:22:04.461+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.173526563Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=55.973µs 08:24:02 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2604240821330900u 1 2024-04-26 08:21:38 08:24:02 kafka | [2024-04-26 08:22:03,854] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 08:24:02 policy-pap | [2024-04-26T08:22:04.485+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-7cd46dae-964c-49b4-95ec-bd835d00b3b4 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.176310749Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 08:24:02 policy-db-migrator | 104 
0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2604240821330900u 1 2024-04-26 08:21:38 08:24:02 kafka | [2024-04-26 08:22:03,854] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 08:24:02 policy-pap | [2024-04-26T08:22:04.485+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 08:24:02 policy-pap | [2024-04-26T08:22:04.485+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 08:24:02 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2604240821330900u 1 2024-04-26 08:21:38 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.176361962Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=51.343µs 08:24:02 kafka | [2024-04-26 08:22:03,854] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 08:24:02 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2604240821330900u 1 2024-04-26 08:21:38 08:24:02 policy-pap | [2024-04-26T08:22:04.550+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.179207861Z level=info msg="Executing migration" id="managed permissions migration" 08:24:02 kafka | [2024-04-26 08:22:03,854] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id 
Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 08:24:02 policy-pap | [2024-04-26T08:22:04.553+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] (Re-)joining group 08:24:02 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2604240821330900u 1 2024-04-26 08:21:38 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.179753448Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=544.907µs 08:24:02 kafka | [2024-04-26 08:22:03,926] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 08:24:02 policy-pap | [2024-04-26T08:22:04.557+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Request joining group due to: need to re-join with the given member-id: consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3-7fda7423-43b4-4275-9f43-73657387fac9 08:24:02 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2604240821330900u 1 2024-04-26 08:21:38 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.184967204Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 08:24:02 kafka | [2024-04-26 08:22:03,926] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 08:24:02 policy-pap | 
[2024-04-26T08:22:04.557+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 08:24:02 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2604240821330900u 1 2024-04-26 08:21:38 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.185204655Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=235.561µs 08:24:02 kafka | [2024-04-26 08:22:03,927] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 08:24:02 policy-pap | [2024-04-26T08:22:04.557+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] (Re-)joining group 08:24:02 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2604240821331000u 1 2024-04-26 08:21:38 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.189543818Z level=info msg="Executing migration" id="RBAC action name migrator" 08:24:02 kafka | [2024-04-26 08:22:03,927] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 08:24:02 policy-pap | [2024-04-26T08:22:07.512+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-7cd46dae-964c-49b4-95ec-bd835d00b3b4', protocol='range'} 08:24:02 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2604240821331000u 1 
2024-04-26 08:21:38 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.191040421Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.495453ms 08:24:02 kafka | [2024-04-26 08:22:03,927] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 08:24:02 policy-pap | [2024-04-26T08:22:07.519+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-7cd46dae-964c-49b4-95ec-bd835d00b3b4=Assignment(partitions=[policy-pdp-pap-0])} 08:24:02 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2604240821331000u 1 2024-04-26 08:21:38 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.197276247Z level=info msg="Executing migration" id="Add UID column to playlist" 08:24:02 kafka | [2024-04-26 08:22:04,044] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 08:24:02 policy-pap | [2024-04-26T08:22:07.541+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-7cd46dae-964c-49b4-95ec-bd835d00b3b4', protocol='range'} 08:24:02 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2604240821331000u 1 2024-04-26 08:21:38 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.20877479Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=11.492814ms 08:24:02 kafka | [2024-04-26 08:22:04,045] INFO Created log 
for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 08:24:02 policy-pap | [2024-04-26T08:22:07.542+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 08:24:02 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2604240821331000u 1 2024-04-26 08:21:38 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.215112471Z level=info msg="Executing migration" id="Update uid column values in playlist" 08:24:02 kafka | [2024-04-26 08:22:04,046] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 08:24:02 policy-pap | [2024-04-26T08:22:07.548+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 08:24:02 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2604240821331000u 1 2024-04-26 08:21:38 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.215459277Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=348.516µs 08:24:02 kafka | [2024-04-26 08:22:04,046] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 08:24:02 policy-pap | [2024-04-26T08:22:07.562+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Successfully joined group with generation Generation{generationId=1, memberId='consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3-7fda7423-43b4-4275-9f43-73657387fac9', 
protocol='range'} 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.21981225Z level=info msg="Executing migration" id="Add index for uid in playlist" 08:24:02 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2604240821331000u 1 2024-04-26 08:21:38 08:24:02 kafka | [2024-04-26 08:22:04,046] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 08:24:02 policy-pap | [2024-04-26T08:22:07.562+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Finished assignment for group at generation 1: {consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3-7fda7423-43b4-4275-9f43-73657387fac9=Assignment(partitions=[policy-pdp-pap-0])} 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.221060062Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.247552ms 08:24:02 kafka | [2024-04-26 08:22:04,055] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 08:24:02 policy-pap | [2024-04-26T08:22:07.569+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Successfully synced group in generation Generation{generationId=1, memberId='consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3-7fda7423-43b4-4275-9f43-73657387fac9', protocol='range'} 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.224234868Z level=info msg="Executing migration" id="update group index for alert rules" 08:24:02 policy-db-migrator | 117 
0170-pdpstatistics_pk.sql upgrade 0900 1000 2604240821331000u 1 2024-04-26 08:21:38 08:24:02 kafka | [2024-04-26 08:22:04,056] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 08:24:02 policy-pap | [2024-04-26T08:22:07.569+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.224680669Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=446.391µs 08:24:02 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2604240821331000u 1 2024-04-26 08:21:38 08:24:02 kafka | [2024-04-26 08:22:04,056] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.229136308Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 08:24:02 kafka | [2024-04-26 08:22:04,056] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 08:24:02 policy-pap | [2024-04-26T08:22:07.570+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Adding newly assigned partitions: policy-pdp-pap-0 08:24:02 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2604240821331100u 1 2024-04-26 08:21:39 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.229475004Z level=info msg="Migration successfully executed" 
id="managed folder permissions alert actions repeated migration" duration=338.386µs 08:24:02 kafka | [2024-04-26 08:22:04,056] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 08:24:02 policy-pap | [2024-04-26T08:22:07.574+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Found no committed offset for partition policy-pdp-pap-0 08:24:02 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2604240821331200u 1 2024-04-26 08:21:39 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.232961705Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 08:24:02 kafka | [2024-04-26 08:22:04,064] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 08:24:02 policy-pap | [2024-04-26T08:22:07.574+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 08:24:02 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2604240821331200u 1 2024-04-26 08:21:39 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.233529163Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=567.328µs 08:24:02 kafka | [2024-04-26 08:22:04,065] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 08:24:02 policy-pap | 
[2024-04-26T08:22:07.593+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 08:24:02 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2604240821331200u 1 2024-04-26 08:21:39 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.23795961Z level=info msg="Executing migration" id="add action column to seed_assignment" 08:24:02 kafka | [2024-04-26 08:22:04,065] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 08:24:02 policy-pap | [2024-04-26T08:22:07.593+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3, groupId=db954cd2-8764-4a44-90af-3bb7f2069f83] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
08:24:02 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2604240821331200u 1 2024-04-26 08:21:39 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.250168268Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=12.210369ms 08:24:02 kafka | [2024-04-26 08:22:04,066] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 08:24:02 policy-pap | [2024-04-26T08:22:10.706+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' 08:24:02 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2604240821331300u 1 2024-04-26 08:21:39 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.255001635Z level=info msg="Executing migration" id="add scope column to seed_assignment" 08:24:02 kafka | [2024-04-26 08:22:04,066] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger)
08:24:02 policy-pap | [2024-04-26T08:22:10.707+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet'
08:24:02 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2604240821331300u 1 2024-04-26 08:21:39
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.262918783Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=7.916068ms
08:24:02 kafka | [2024-04-26 08:22:04,072] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 policy-pap | [2024-04-26T08:22:10.709+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 2 ms
08:24:02 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2604240821331300u 1 2024-04-26 08:21:39
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.267487897Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
08:24:02 kafka | [2024-04-26 08:22:04,073] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 policy-pap | [2024-04-26T08:22:24.724+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers:
08:24:02 policy-db-migrator | policyadmin: OK @ 1300
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.268741008Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.25285ms
08:24:02 kafka | [2024-04-26 08:22:04,073] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition)
08:24:02 policy-pap | []
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.272735164Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
08:24:02 kafka | [2024-04-26 08:22:04,073] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 policy-pap | [2024-04-26T08:22:24.725+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.363629497Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=90.882803ms
08:24:02 kafka | [2024-04-26 08:22:04,073] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"7c7dae4d-eb28-477f-a313-371e5e410caf","timestampMs":1714119744682,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup"}
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.384348711Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
08:24:02 kafka | [2024-04-26 08:22:04,080] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 policy-pap | [2024-04-26T08:22:24.725+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.387553989Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=3.206217ms
08:24:02 kafka | [2024-04-26 08:22:04,081] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.394659697Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
08:24:02 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"7c7dae4d-eb28-477f-a313-371e5e410caf","timestampMs":1714119744682,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup"}
08:24:02 kafka | [2024-04-26 08:22:04,081] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.396245864Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.600438ms
08:24:02 policy-pap | [2024-04-26T08:22:24.735+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
08:24:02 kafka | [2024-04-26 08:22:04,081] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.39962204Z level=info msg="Executing migration" id="add primary key to seed_assigment"
08:24:02 policy-pap | [2024-04-26T08:22:24.810+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate starting
08:24:02 kafka | [2024-04-26 08:22:04,082] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 policy-pap | [2024-04-26T08:22:24.810+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate starting listener
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.424431306Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=24.809486ms
08:24:02 kafka | [2024-04-26 08:22:04,089] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 policy-pap | [2024-04-26T08:22:24.811+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate starting timer
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.42859655Z level=info msg="Executing migration" id="add origin column to seed_assignment"
08:24:02 kafka | [2024-04-26 08:22:04,089] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.435071537Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.474227ms
08:24:02 policy-pap | [2024-04-26T08:22:24.812+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=c089a3a3-4fc1-43c0-a7be-21299199c004, expireMs=1714119774812]
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.437777369Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
08:24:02 policy-pap | [2024-04-26T08:22:24.813+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate starting enqueue
08:24:02 kafka | [2024-04-26 08:22:04,090] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.438099665Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=321.836µs
08:24:02 policy-pap | [2024-04-26T08:22:24.813+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=c089a3a3-4fc1-43c0-a7be-21299199c004, expireMs=1714119774812]
08:24:02 kafka | [2024-04-26 08:22:04,090] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.441033838Z level=info msg="Executing migration" id="prevent seeding OnCall access"
08:24:02 policy-pap | [2024-04-26T08:22:24.814+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate started
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.441364725Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=329.907µs
08:24:02 kafka | [2024-04-26 08:22:04,090] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 policy-pap | [2024-04-26T08:22:24.815+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.446002553Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
08:24:02 kafka | [2024-04-26 08:22:04,098] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 policy-pap | {"source":"pap-b4f6f8e5-f898-4e69-90e7-669877e7a07f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c089a3a3-4fc1-43c0-a7be-21299199c004","timestampMs":1714119744793,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.446353049Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=350.207µs
08:24:02 kafka | [2024-04-26 08:22:04,099] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 policy-pap | [2024-04-26T08:22:24.845+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
08:24:02 kafka | [2024-04-26 08:22:04,099] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.451596747Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
08:24:02 policy-pap | {"source":"pap-b4f6f8e5-f898-4e69-90e7-669877e7a07f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c089a3a3-4fc1-43c0-a7be-21299199c004","timestampMs":1714119744793,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
08:24:02 kafka | [2024-04-26 08:22:04,099] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.452258078Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=618.97µs
08:24:02 policy-pap | [2024-04-26T08:22:24.846+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
08:24:02 kafka | [2024-04-26 08:22:04,099] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.458120336Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
08:24:02 policy-pap | {"source":"pap-b4f6f8e5-f898-4e69-90e7-669877e7a07f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"c089a3a3-4fc1-43c0-a7be-21299199c004","timestampMs":1714119744793,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
08:24:02 kafka | [2024-04-26 08:22:04,106] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.458771528Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=651.992µs
08:24:02 policy-pap | [2024-04-26T08:22:24.846+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE
08:24:02 kafka | [2024-04-26 08:22:04,107] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 policy-pap | [2024-04-26T08:22:24.848+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE
08:24:02 kafka | [2024-04-26 08:22:04,107] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.465115919Z level=info msg="Executing migration" id="create folder table"
08:24:02 policy-pap | [2024-04-26T08:22:24.872+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
08:24:02 kafka | [2024-04-26 08:22:04,107] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.466263054Z level=info msg="Migration successfully executed" id="create folder table" duration=1.148386ms
08:24:02 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c089a3a3-4fc1-43c0-a7be-21299199c004","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"9e21325e-0675-4fb1-917c-73db7541fd22","timestampMs":1714119744850,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
08:24:02 kafka | [2024-04-26 08:22:04,108] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 policy-pap | [2024-04-26T08:22:24.872+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
08:24:02 kafka | [2024-04-26 08:22:04,115] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.471299072Z level=info msg="Executing migration" id="Add index for parent_uid"
08:24:02 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"c089a3a3-4fc1-43c0-a7be-21299199c004","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"9e21325e-0675-4fb1-917c-73db7541fd22","timestampMs":1714119744850,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
08:24:02 kafka | [2024-04-26 08:22:04,116] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.472812526Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.512724ms
08:24:02 policy-pap | [2024-04-26T08:22:24.875+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate stopping
08:24:02 kafka | [2024-04-26 08:22:04,116] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.476070625Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
08:24:02 policy-pap | [2024-04-26T08:22:24.875+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id c089a3a3-4fc1-43c0-a7be-21299199c004
08:24:02 kafka | [2024-04-26 08:22:04,116] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 policy-pap | [2024-04-26T08:22:24.876+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate stopping enqueue
08:24:02 kafka | [2024-04-26 08:22:04,117] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.477415221Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.344325ms
08:24:02 policy-pap | [2024-04-26T08:22:24.876+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate stopping timer
08:24:02 kafka | [2024-04-26 08:22:04,123] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.485407953Z level=info msg="Executing migration" id="Update folder title length"
08:24:02 policy-pap | [2024-04-26T08:22:24.877+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=c089a3a3-4fc1-43c0-a7be-21299199c004, expireMs=1714119774812]
08:24:02 kafka | [2024-04-26 08:22:04,123] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.485733618Z level=info msg="Migration successfully executed" id="Update folder title length" duration=323.576µs
08:24:02 policy-pap | [2024-04-26T08:22:24.877+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate stopping listener
08:24:02 kafka | [2024-04-26 08:22:04,124] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.490876201Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
08:24:02 policy-pap | [2024-04-26T08:22:24.877+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate stopped
08:24:02 kafka | [2024-04-26 08:22:04,124] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.492311341Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.43623ms
08:24:02 policy-pap | [2024-04-26T08:22:24.880+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
08:24:02 kafka | [2024-04-26 08:22:04,124] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.496197241Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
08:24:02 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"46151d45-9ff7-40dd-999c-96d4f36448f0","timestampMs":1714119744849,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup"}
08:24:02 kafka | [2024-04-26 08:22:04,130] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.497369069Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.171818ms
08:24:02 policy-pap | [2024-04-26T08:22:24.882+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate successful
08:24:02 kafka | [2024-04-26 08:22:04,131] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.501394546Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
08:24:02 policy-pap | [2024-04-26T08:22:24.882+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d start publishing next request
08:24:02 kafka | [2024-04-26 08:22:04,131] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.502747522Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.352246ms
08:24:02 policy-pap | [2024-04-26T08:22:24.882+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpStateChange starting
08:24:02 kafka | [2024-04-26 08:22:04,131] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.50597296Z level=info msg="Executing migration" id="Sync dashboard and folder table"
08:24:02 policy-pap | [2024-04-26T08:22:24.882+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpStateChange starting listener
08:24:02 kafka | [2024-04-26 08:22:04,131] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.506520547Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=547.127µs
08:24:02 policy-pap | [2024-04-26T08:22:24.882+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpStateChange starting timer
08:24:02 kafka | [2024-04-26 08:22:04,139] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.509813969Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
08:24:02 policy-pap | [2024-04-26T08:22:24.882+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=a5f1d0a2-79e5-4903-b04d-2fc825203dbc, expireMs=1714119774882]
08:24:02 kafka | [2024-04-26 08:22:04,140] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.510201657Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=386.909µs
08:24:02 policy-pap | [2024-04-26T08:22:24.883+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpStateChange starting enqueue
08:24:02 kafka | [2024-04-26 08:22:04,140] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.513437996Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
08:24:02 policy-pap | [2024-04-26T08:22:24.883+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpStateChange started
08:24:02 kafka | [2024-04-26 08:22:04,140] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.514900187Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.461921ms
08:24:02 policy-pap | [2024-04-26T08:22:24.883+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 29999ms Timer [name=a5f1d0a2-79e5-4903-b04d-2fc825203dbc, expireMs=1714119774882]
08:24:02 kafka | [2024-04-26 08:22:04,140] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.519161406Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
08:24:02 policy-pap | [2024-04-26T08:22:24.884+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap]
08:24:02 kafka | [2024-04-26 08:22:04,155] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.521446028Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=2.283292ms
08:24:02 policy-pap | {"source":"pap-b4f6f8e5-f898-4e69-90e7-669877e7a07f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"a5f1d0a2-79e5-4903-b04d-2fc825203dbc","timestampMs":1714119744793,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
08:24:02 kafka | [2024-04-26 08:22:04,156] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.525486256Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
08:24:02 policy-pap | [2024-04-26T08:22:24.914+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.527435322Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.950646ms
08:24:02 kafka | [2024-04-26 08:22:04,157] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition)
08:24:02 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"46151d45-9ff7-40dd-999c-96d4f36448f0","timestampMs":1714119744849,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup"}
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.532489589Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
08:24:02 kafka | [2024-04-26 08:22:04,157] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.533743991Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.253712ms
08:24:02 policy-pap | [2024-04-26T08:22:24.915+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
08:24:02 kafka | [2024-04-26 08:22:04,158] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 policy-pap | [2024-04-26T08:22:24.916+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.53740057Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
08:24:02 kafka | [2024-04-26 08:22:04,163] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.538895493Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.494923ms
08:24:02 policy-pap | {"source":"pap-b4f6f8e5-f898-4e69-90e7-669877e7a07f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"a5f1d0a2-79e5-4903-b04d-2fc825203dbc","timestampMs":1714119744793,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
08:24:02 kafka | [2024-04-26 08:22:04,164] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.541804145Z level=info msg="Executing migration" id="create anon_device table"
08:24:02 kafka | [2024-04-26 08:22:04,164] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition)
08:24:02 policy-pap | [2024-04-26T08:22:24.917+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE
08:24:02 kafka | [2024-04-26 08:22:04,164] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 policy-pap | [2024-04-26T08:22:24.917+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.543087108Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.282733ms
08:24:02 kafka | [2024-04-26 08:22:04,164] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"a5f1d0a2-79e5-4903-b04d-2fc825203dbc","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"a7018436-53c5-4d20-9150-d26e4bf63ebb","timestampMs":1714119744897,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.548695673Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
08:24:02 kafka | [2024-04-26 08:22:04,169] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 policy-pap | [2024-04-26T08:22:24.930+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpStateChange stopping
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.549986126Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.290353ms
08:24:02 kafka | [2024-04-26 08:22:04,170] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 policy-pap | [2024-04-26T08:22:24.931+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpStateChange stopping enqueue
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.554043775Z level=info msg="Executing migration" id="add index anon_device.updated_at"
08:24:02 kafka | [2024-04-26 08:22:04,170] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition)
08:24:02 policy-pap | [2024-04-26T08:22:24.931+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpStateChange stopping timer
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.55596972Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.925905ms
08:24:02 kafka | [2024-04-26 08:22:04,170] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 policy-pap | [2024-04-26T08:22:24.931+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=a5f1d0a2-79e5-4903-b04d-2fc825203dbc, expireMs=1714119774882]
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.561257908Z level=info msg="Executing migration" id="create signing_key table"
08:24:02 kafka | [2024-04-26 08:22:04,170] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.562600604Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.343626ms
08:24:02 kafka | [2024-04-26 08:22:04,177] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 policy-pap | [2024-04-26T08:22:24.931+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpStateChange stopping listener
08:24:02 kafka | [2024-04-26 08:22:04,177] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.567127246Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
08:24:02 policy-pap | [2024-04-26T08:22:24.931+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpStateChange stopped
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.568548336Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.41926ms
08:24:02 kafka | [2024-04-26 08:22:04,177] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition)
08:24:02 policy-pap | [2024-04-26T08:22:24.931+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpStateChange successful
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.571985144Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
08:24:02 kafka | [2024-04-26 08:22:04,177] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.573181542Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.196279ms
08:24:02 policy-pap | [2024-04-26T08:22:24.931+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d start publishing next request
08:24:02 kafka | [2024-04-26 08:22:04,178] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1.
(state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.577274973Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 08:24:02 kafka | [2024-04-26 08:22:04,185] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 08:24:02 policy-pap | [2024-04-26T08:22:24.931+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate starting 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.577708215Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=433.882µs 08:24:02 kafka | [2024-04-26 08:22:04,185] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 08:24:02 policy-pap | [2024-04-26T08:22:24.932+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate starting listener 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.581074569Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 08:24:02 kafka | [2024-04-26 08:22:04,185] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 08:24:02 policy-pap | [2024-04-26T08:22:24.932+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate starting timer 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.590668379Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=9.59319ms 08:24:02 kafka | [2024-04-26 08:22:04,186] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition 
__consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) 08:24:02 policy-pap | [2024-04-26T08:22:24.932+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=4a11c98e-6791-453e-9808-0827aeaec0c3, expireMs=1714119774932] 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.59414905Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 08:24:02 kafka | [2024-04-26 08:22:04,186] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 08:24:02 policy-pap | [2024-04-26T08:22:24.932+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate starting enqueue 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.595180531Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=1.032401ms 08:24:02 kafka | [2024-04-26 08:22:04,193] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 08:24:02 policy-pap | [2024-04-26T08:22:24.932+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate started 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.600138594Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 08:24:02 kafka | [2024-04-26 08:22:04,198] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 08:24:02 policy-pap | [2024-04-26T08:22:24.933+00:00|INFO|network|Thread-7] 
[OUT|KAFKA|policy-pdp-pap] 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.602254697Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=2.115494ms 08:24:02 kafka | [2024-04-26 08:22:04,198] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) 08:24:02 policy-pap | {"source":"pap-b4f6f8e5-f898-4e69-90e7-669877e7a07f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"4a11c98e-6791-453e-9808-0827aeaec0c3","timestampMs":1714119744908,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.606685874Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 08:24:02 kafka | [2024-04-26 08:22:04,198] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) 08:24:02 policy-pap | [2024-04-26T08:22:24.937+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.6078154Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.129476ms 08:24:02 kafka | [2024-04-26 08:22:04,198] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 08:24:02 policy-pap | {"source":"pap-b4f6f8e5-f898-4e69-90e7-669877e7a07f","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"a5f1d0a2-79e5-4903-b04d-2fc825203dbc","timestampMs":1714119744793,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.612084619Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 08:24:02 kafka | [2024-04-26 08:22:04,209] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 08:24:02 policy-pap | [2024-04-26T08:22:24.937+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.613222624Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.137765ms 08:24:02 kafka | [2024-04-26 08:22:04,210] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 08:24:02 policy-pap | [2024-04-26T08:22:24.942+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.617757937Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 08:24:02 kafka | [2024-04-26 08:22:04,210] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 08:24:02 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for 
PdpStateChange","policies":[],"response":{"responseTo":"a5f1d0a2-79e5-4903-b04d-2fc825203dbc","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"a7018436-53c5-4d20-9150-d26e4bf63ebb","timestampMs":1714119744897,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.619083262Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.324855ms 08:24:02 kafka | [2024-04-26 08:22:04,210] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 08:24:02 policy-pap | [2024-04-26T08:22:24.943+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.622662687Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 08:24:02 kafka | [2024-04-26 08:22:04,210] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 08:24:02 policy-pap | {"source":"pap-b4f6f8e5-f898-4e69-90e7-669877e7a07f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"4a11c98e-6791-453e-9808-0827aeaec0c3","timestampMs":1714119744908,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.625123387Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=2.46005ms 08:24:02 kafka | [2024-04-26 08:22:04,217] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 08:24:02 policy-pap | [2024-04-26T08:22:24.943+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id a5f1d0a2-79e5-4903-b04d-2fc825203dbc 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.630632967Z level=info msg="Executing migration" id="create sso_setting table" 08:24:02 policy-pap | [2024-04-26T08:22:24.943+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 08:24:02 kafka | [2024-04-26 08:22:04,217] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.632003134Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.370757ms 08:24:02 policy-pap | [2024-04-26T08:22:24.945+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 08:24:02 kafka | [2024-04-26 08:22:04,217] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 08:24:02 
grafana | logger=migrator t=2024-04-26T08:21:37.645533678Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 08:24:02 policy-pap | {"source":"pap-b4f6f8e5-f898-4e69-90e7-669877e7a07f","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"4a11c98e-6791-453e-9808-0827aeaec0c3","timestampMs":1714119744908,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 08:24:02 kafka | [2024-04-26 08:22:04,218] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.646762907Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.23375ms 08:24:02 policy-pap | [2024-04-26T08:22:24.945+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 08:24:02 kafka | [2024-04-26 08:22:04,218] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.65008841Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 08:24:02 policy-pap | [2024-04-26T08:22:24.951+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 08:24:02 kafka | [2024-04-26 08:22:04,224] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.650444358Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=356.668µs 08:24:02 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"4a11c98e-6791-453e-9808-0827aeaec0c3","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"e3874a72-08db-45fe-aceb-34c903ea4e7e","timestampMs":1714119744944,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 08:24:02 kafka | [2024-04-26 08:22:04,224] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.653134119Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 08:24:02 policy-pap | [2024-04-26T08:22:24.952+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate stopping 08:24:02 kafka | [2024-04-26 08:22:04,224] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.653222134Z 
level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=88.245µs 08:24:02 policy-pap | [2024-04-26T08:22:24.952+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate stopping enqueue 08:24:02 kafka | [2024-04-26 08:22:04,224] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.6568091Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 08:24:02 policy-pap | [2024-04-26T08:22:24.952+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate stopping timer 08:24:02 kafka | [2024-04-26 08:22:04,224] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.66907527Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=12.26629ms 08:24:02 policy-pap | [2024-04-26T08:22:24.952+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=4a11c98e-6791-453e-9808-0827aeaec0c3, expireMs=1714119774932] 08:24:02 kafka | [2024-04-26 08:22:04,232] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.672224425Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 08:24:02 policy-pap | [2024-04-26T08:22:24.952+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate stopping listener 08:24:02 kafka | [2024-04-26 08:22:04,232] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.681798994Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.573568ms 08:24:02 kafka | [2024-04-26 08:22:04,232] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.68438247Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 08:24:02 policy-pap | [2024-04-26T08:22:24.952+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate stopped 08:24:02 kafka | [2024-04-26 08:22:04,232] INFO 
[Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.684803661Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=420.751µs 08:24:02 policy-pap | [2024-04-26T08:22:24.956+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 08:24:02 kafka | [2024-04-26 08:22:04,232] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 08:24:02 grafana | logger=migrator t=2024-04-26T08:21:37.687784487Z level=info msg="migrations completed" performed=548 skipped=0 duration=4.128467893s 08:24:02 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"4a11c98e-6791-453e-9808-0827aeaec0c3","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"e3874a72-08db-45fe-aceb-34c903ea4e7e","timestampMs":1714119744944,"name":"apex-b183f0da-bf00-44a3-b3ae-398d8035a48d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 08:24:02 kafka | [2024-04-26 08:22:04,240] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 08:24:02 grafana | logger=sqlstore t=2024-04-26T08:21:37.698292802Z level=info msg="Created default admin" user=admin 08:24:02 policy-pap | [2024-04-26T08:22:24.956+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d PdpUpdate successful 08:24:02 kafka | [2024-04-26 08:22:04,241] INFO Created log 
for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 08:24:02 grafana | logger=sqlstore t=2024-04-26T08:21:37.698713662Z level=info msg="Created default organization" 08:24:02 policy-pap | [2024-04-26T08:22:24.956+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 4a11c98e-6791-453e-9808-0827aeaec0c3 08:24:02 kafka | [2024-04-26 08:22:04,241] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 08:24:02 grafana | logger=secrets t=2024-04-26T08:21:37.706891283Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 08:24:02 policy-pap | [2024-04-26T08:22:24.956+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-b183f0da-bf00-44a3-b3ae-398d8035a48d has no more requests 08:24:02 kafka | [2024-04-26 08:22:04,241] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 08:24:02 grafana | logger=plugin.store t=2024-04-26T08:21:37.730472409Z level=info msg="Loading plugins..." 08:24:02 policy-pap | [2024-04-26T08:22:31.170+00:00|WARN|NonInjectionManager|pool-2-thread-1] Falling back to injection-less client. 08:24:02 kafka | [2024-04-26 08:22:04,241] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 08:24:02 grafana | logger=local.finder t=2024-04-26T08:21:37.775970428Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 08:24:02 policy-pap | [2024-04-26T08:22:31.214+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 08:24:02 kafka | [2024-04-26 08:22:04,247] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 08:24:02 grafana | logger=plugin.store t=2024-04-26T08:21:37.776003019Z level=info msg="Plugins loaded" count=55 duration=45.529231ms 08:24:02 policy-pap | [2024-04-26T08:22:31.221+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 08:24:02 kafka | [2024-04-26 08:22:04,248] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 08:24:02 grafana | logger=query_data t=2024-04-26T08:21:37.782303128Z level=info msg="Query Service initialization" 08:24:02 policy-pap | [2024-04-26T08:22:31.225+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 08:24:02 kafka | [2024-04-26 08:22:04,248] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 08:24:02 grafana | logger=live.push_http t=2024-04-26T08:21:37.792896307Z level=info msg="Live Push Gateway initialization" 08:24:02 policy-pap | [2024-04-26T08:22:31.634+00:00|INFO|SessionData|http-nio-6969-exec-6] unknown group testGroup 08:24:02 kafka | [2024-04-26 08:22:04,248] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 08:24:02 grafana | logger=ngalert.migration t=2024-04-26T08:21:37.79949897Z 
level=info msg=Starting 08:24:02 policy-pap | [2024-04-26T08:22:32.141+00:00|INFO|SessionData|http-nio-6969-exec-6] create cached group testGroup 08:24:02 kafka | [2024-04-26 08:22:04,248] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 08:24:02 policy-pap | [2024-04-26T08:22:32.142+00:00|INFO|SessionData|http-nio-6969-exec-6] creating DB group testGroup 08:24:02 grafana | logger=ngalert.migration t=2024-04-26T08:21:37.799969403Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false 08:24:02 kafka | [2024-04-26 08:22:04,255] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 08:24:02 policy-pap | [2024-04-26T08:22:32.694+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group testGroup 08:24:02 grafana | logger=ngalert.migration orgID=1 t=2024-04-26T08:21:37.800410115Z level=info msg="Migrating alerts for organisation" 08:24:02 kafka | [2024-04-26 08:22:04,255] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 08:24:02 policy-pap | [2024-04-26T08:22:32.913+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy onap.restart.tca 1.0.0 08:24:02 grafana | logger=ngalert.migration orgID=1 t=2024-04-26T08:21:37.801090358Z level=info msg="Alerts found to migrate" alerts=0 08:24:02 kafka | [2024-04-26 08:22:04,255] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 
(kafka.cluster.Partition)
08:24:02 policy-pap | [2024-04-26T08:22:33.021+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy operational.apex.decisionMaker 1.0.0
08:24:02 grafana | logger=ngalert.migration t=2024-04-26T08:21:37.803026902Z level=info msg="Completed alerting migration"
08:24:02 kafka | [2024-04-26 08:22:04,256] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 policy-pap | [2024-04-26T08:22:33.021+00:00|INFO|SessionData|http-nio-6969-exec-10] update cached group testGroup
08:24:02 grafana | logger=ngalert.state.manager t=2024-04-26T08:21:37.838182775Z level=info msg="Running in alternative execution of Error/NoData mode"
08:24:02 kafka | [2024-04-26 08:22:04,256] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 policy-pap | [2024-04-26T08:22:33.021+00:00|INFO|SessionData|http-nio-6969-exec-10] updating DB group testGroup
08:24:02 grafana | logger=infra.usagestats.collector t=2024-04-26T08:21:37.841068727Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
08:24:02 kafka | [2024-04-26 08:22:04,262] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 policy-pap | [2024-04-26T08:22:33.034+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-04-26T08:22:32Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-04-26T08:22:33Z, user=policyadmin)]
08:24:02 grafana | logger=provisioning.datasources t=2024-04-26T08:21:37.84399201Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz
08:24:02 kafka | [2024-04-26 08:22:04,263] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 policy-pap | [2024-04-26T08:22:33.703+00:00|INFO|SessionData|http-nio-6969-exec-4] cache group testGroup
08:24:02 grafana | logger=provisioning.alerting t=2024-04-26T08:21:37.861576251Z level=info msg="starting to provision alerting"
08:24:02 kafka | [2024-04-26 08:22:04,263] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition)
08:24:02 policy-pap | [2024-04-26T08:22:33.703+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-4] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0
08:24:02 grafana | logger=provisioning.alerting t=2024-04-26T08:21:37.861601742Z level=info msg="finished to provision alerting"
08:24:02 kafka | [2024-04-26 08:22:04,263] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 policy-pap | [2024-04-26T08:22:33.703+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] Registering an undeploy for policy onap.restart.tca 1.0.0
08:24:02 grafana | logger=grafanaStorageLogger t=2024-04-26T08:21:37.861899518Z level=info msg="Storage starting"
08:24:02 kafka | [2024-04-26 08:22:04,263] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 policy-pap | [2024-04-26T08:22:33.704+00:00|INFO|SessionData|http-nio-6969-exec-4] update cached group testGroup
08:24:02 grafana | logger=ngalert.state.manager t=2024-04-26T08:21:37.862279146Z level=info msg="Warming state cache for startup"
08:24:02 kafka | [2024-04-26 08:22:04,268] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 policy-pap | [2024-04-26T08:22:33.704+00:00|INFO|SessionData|http-nio-6969-exec-4] updating DB group testGroup
08:24:02 grafana | logger=ngalert.multiorg.alertmanager t=2024-04-26T08:21:37.864087575Z level=info msg="Starting MultiOrg Alertmanager"
08:24:02 kafka | [2024-04-26 08:22:04,268] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 policy-pap | [2024-04-26T08:22:33.718+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-4] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-26T08:22:33Z, user=policyadmin)]
08:24:02 grafana | logger=http.server t=2024-04-26T08:21:37.864965987Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket=
08:24:02 kafka | [2024-04-26 08:22:04,269] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition)
08:24:02 policy-pap | [2024-04-26T08:22:34.073+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup
08:24:02 grafana | logger=ngalert.state.manager t=2024-04-26T08:21:37.941829753Z level=info msg="State cache has been initialized" states=0 duration=79.546778ms
08:24:02 kafka | [2024-04-26 08:22:04,269] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 policy-pap | [2024-04-26T08:22:34.074+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup
08:24:02 grafana | logger=provisioning.dashboard t=2024-04-26T08:21:37.942989159Z level=info msg="starting to provision dashboards"
08:24:02 kafka | [2024-04-26 08:22:04,269] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 policy-pap | [2024-04-26T08:22:34.074+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0
08:24:02 grafana | logger=ngalert.scheduler t=2024-04-26T08:21:37.941886406Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
08:24:02 kafka | [2024-04-26 08:22:04,274] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 policy-pap | [2024-04-26T08:22:34.074+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0
08:24:02 grafana | logger=plugins.update.checker t=2024-04-26T08:21:37.951119608Z level=info msg="Update check succeeded" duration=89.133827ms
08:24:02 kafka | [2024-04-26 08:22:04,274] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 policy-pap | [2024-04-26T08:22:34.074+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup
08:24:02 grafana | logger=ticker t=2024-04-26T08:21:37.951469196Z level=info msg=starting first_tick=2024-04-26T08:21:40Z
08:24:02 kafka | [2024-04-26 08:22:04,274] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition)
08:24:02 policy-pap | [2024-04-26T08:22:34.074+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup
08:24:02 grafana | logger=grafana.update.checker t=2024-04-26T08:21:37.971821703Z level=info msg="Update check succeeded" duration=107.400393ms
08:24:02 kafka | [2024-04-26 08:22:04,274] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 policy-pap | [2024-04-26T08:22:34.163+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-26T08:22:34Z, user=policyadmin)]
08:24:02 grafana | logger=sqlstore.transactions t=2024-04-26T08:21:37.985711342Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
08:24:02 kafka | [2024-04-26 08:22:04,274] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 policy-pap | [2024-04-26T08:22:54.800+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup
08:24:02 grafana | logger=sqlstore.transactions t=2024-04-26T08:21:37.997524981Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
08:24:02 kafka | [2024-04-26 08:22:04,287] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 policy-pap | [2024-04-26T08:22:54.804+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup
08:24:02 grafana | logger=grafana-apiserver t=2024-04-26T08:21:38.074428499Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
08:24:02 kafka | [2024-04-26 08:22:04,287] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 policy-pap | [2024-04-26T08:22:54.813+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=c089a3a3-4fc1-43c0-a7be-21299199c004, expireMs=1714119774812]
08:24:02 grafana | logger=grafana-apiserver t=2024-04-26T08:21:38.075074061Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
08:24:02 kafka | [2024-04-26 08:22:04,287] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition)
08:24:02 policy-pap | [2024-04-26T08:22:54.882+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=a5f1d0a2-79e5-4903-b04d-2fc825203dbc, expireMs=1714119774882]
08:24:02 grafana | logger=sqlstore.transactions t=2024-04-26T08:21:38.13690921Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
08:24:02 kafka | [2024-04-26 08:22:04,288] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 grafana | logger=sqlstore.transactions t=2024-04-26T08:21:38.150624882Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
08:24:02 kafka | [2024-04-26 08:22:04,288] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 grafana | logger=provisioning.dashboard t=2024-04-26T08:21:38.28040699Z level=info msg="finished to provision dashboards"
08:24:02 kafka | [2024-04-26 08:22:04,300] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 grafana | logger=infra.usagestats t=2024-04-26T08:23:03.874147633Z level=info msg="Usage stats are ready to report"
08:24:02 kafka | [2024-04-26 08:22:04,300] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 kafka | [2024-04-26 08:22:04,300] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,300] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,301] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,309] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 kafka | [2024-04-26 08:22:04,309] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 kafka | [2024-04-26 08:22:04,309] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,309] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,310] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,319] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 kafka | [2024-04-26 08:22:04,320] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 kafka | [2024-04-26 08:22:04,320] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,320] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,320] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,328] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 kafka | [2024-04-26 08:22:04,328] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 kafka | [2024-04-26 08:22:04,328] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,329] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,329] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,337] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 kafka | [2024-04-26 08:22:04,337] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 kafka | [2024-04-26 08:22:04,337] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,337] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,338] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,343] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 kafka | [2024-04-26 08:22:04,344] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 kafka | [2024-04-26 08:22:04,344] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,344] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,345] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,355] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 kafka | [2024-04-26 08:22:04,356] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 kafka | [2024-04-26 08:22:04,356] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,356] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,357] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,364] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 kafka | [2024-04-26 08:22:04,365] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 kafka | [2024-04-26 08:22:04,365] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,365] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,365] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,371] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 kafka | [2024-04-26 08:22:04,372] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 kafka | [2024-04-26 08:22:04,372] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,372] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,372] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,378] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 kafka | [2024-04-26 08:22:04,378] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 kafka | [2024-04-26 08:22:04,378] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,379] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,379] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,385] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 kafka | [2024-04-26 08:22:04,385] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 kafka | [2024-04-26 08:22:04,386] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,386] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,386] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,392] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 kafka | [2024-04-26 08:22:04,392] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 kafka | [2024-04-26 08:22:04,393] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,393] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,393] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,400] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 kafka | [2024-04-26 08:22:04,401] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 kafka | [2024-04-26 08:22:04,401] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,401] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,401] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,407] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 kafka | [2024-04-26 08:22:04,407] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 kafka | [2024-04-26 08:22:04,407] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,408] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,408] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,414] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 kafka | [2024-04-26 08:22:04,415] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 kafka | [2024-04-26 08:22:04,415] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,415] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,415] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,421] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$)
08:24:02 kafka | [2024-04-26 08:22:04,422] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager)
08:24:02 kafka | [2024-04-26 08:22:04,422] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,422] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
08:24:02 kafka | [2024-04-26 08:22:04,422] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(RfiyP89qRi-5ZTNhftzAtg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,426] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,426] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,426] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,427] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,427] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,427] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,427] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,427] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,427] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,427] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,427] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,427] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,427] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,427] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,428] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,428] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,428] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,428] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,428] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,428] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,428] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,428] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,428] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,428] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,428] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,428] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,429] TRACE [Broker id=1] Completed
LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 08:24:02 kafka | [2024-04-26 08:22:04,429] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 08:24:02 kafka | [2024-04-26 08:22:04,429] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 08:24:02 kafka | [2024-04-26 08:22:04,429] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 08:24:02 kafka | [2024-04-26 08:22:04,429] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 08:24:02 kafka | [2024-04-26 08:22:04,429] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 08:24:02 kafka | [2024-04-26 08:22:04,429] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 08:24:02 kafka | [2024-04-26 08:22:04,429] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 08:24:02 kafka | [2024-04-26 08:22:04,429] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 08:24:02 kafka | [2024-04-26 
08:22:04,429] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 08:24:02 kafka | [2024-04-26 08:22:04,429] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 08:24:02 kafka | [2024-04-26 08:22:04,429] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 08:24:02 kafka | [2024-04-26 08:22:04,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 08:24:02 kafka | [2024-04-26 08:22:04,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 08:24:02 kafka | [2024-04-26 08:22:04,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 08:24:02 kafka | [2024-04-26 08:22:04,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 08:24:02 kafka | [2024-04-26 08:22:04,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 08:24:02 kafka | [2024-04-26 08:22:04,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 
(state.change.logger) 08:24:02 kafka | [2024-04-26 08:22:04,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 08:24:02 kafka | [2024-04-26 08:22:04,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 08:24:02 kafka | [2024-04-26 08:22:04,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 08:24:02 kafka | [2024-04-26 08:22:04,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 08:24:02 kafka | [2024-04-26 08:22:04,430] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 08:24:02 kafka | [2024-04-26 08:22:04,431] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 3 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 08:24:02 kafka | [2024-04-26 08:22:04,432] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,434] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,435] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,435] INFO 
[GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,436] INFO [GroupMetadataManager brokerId=1] Scheduling 
loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,436] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,436] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,437] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,437] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 
08:24:02 kafka | [2024-04-26 08:22:04,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,438] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,438] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,439] 
INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,439] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,439] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,440] INFO [GroupCoordinator 1]: Elected as the 
group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,440] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,440] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,440] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,440] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,440] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,440] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,440] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,440] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,440] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,440] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 
(kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,440] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,440] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,440] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,440] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,441] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,441] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,441] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,441] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 6 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,441] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,441] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,441] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,441] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,442] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,442] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 6 milliseconds for epoch 0, of which 5 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,442] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,442] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,442] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,442] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,442] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,442] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,442] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,442] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,442] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,442] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,442] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,442] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,443] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,443] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,443] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,443] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 08:24:02 kafka | [2024-04-26 08:22:04,443] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:04,443] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,443] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,443] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,443] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
08:24:02 kafka | [2024-04-26 08:22:04,443] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,443] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,443] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,443] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
08:24:02 kafka | [2024-04-26 08:22:04,443] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,444] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,444] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,444] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,444] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,444] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,444] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,444] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,444] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,445] INFO [Broker id=1] Finished LeaderAndIsr request in 696ms correlationId 3 from controller 1 for 50 partitions (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,445] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,445] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 7 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,446] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,447] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=RfiyP89qRi-5ZTNhftzAtg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 3 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,449] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,450] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,451] INFO [Broker id=1] Add 50 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 4 (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,451] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 4 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger)
08:24:02 kafka | [2024-04-26 08:22:04,447] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 7 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,453] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,454] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,454] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,454] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
08:24:02 kafka | [2024-04-26 08:22:04,481] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-7cd46dae-964c-49b4-95ec-bd835d00b3b4 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
08:24:02 kafka | [2024-04-26 08:22:04,495] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-7cd46dae-964c-49b4-95ec-bd835d00b3b4 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
08:24:02 kafka | [2024-04-26 08:22:04,556] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group db954cd2-8764-4a44-90af-3bb7f2069f83 in Empty state. Created a new member id consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3-7fda7423-43b4-4275-9f43-73657387fac9 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
08:24:02 kafka | [2024-04-26 08:22:04,559] INFO [GroupCoordinator 1]: Preparing to rebalance group db954cd2-8764-4a44-90af-3bb7f2069f83 in state PreparingRebalance with old generation 0 (__consumer_offsets-21) (reason: Adding new member consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3-7fda7423-43b4-4275-9f43-73657387fac9 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
08:24:02 kafka | [2024-04-26 08:22:05,034] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 385d2de3-e329-4c2e-8254-58c110e4f277 in Empty state. Created a new member id consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2-623aa870-0e4f-4435-b6a7-fae0c0299f99 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator)
08:24:02 kafka | [2024-04-26 08:22:05,037] INFO [GroupCoordinator 1]: Preparing to rebalance group 385d2de3-e329-4c2e-8254-58c110e4f277 in state PreparingRebalance with old generation 0 (__consumer_offsets-27) (reason: Adding new member consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2-623aa870-0e4f-4435-b6a7-fae0c0299f99 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator)
08:24:02 kafka | [2024-04-26 08:22:07,509] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator)
08:24:02 kafka | [2024-04-26 08:22:07,529] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-7cd46dae-964c-49b4-95ec-bd835d00b3b4 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator)
08:24:02 kafka | [2024-04-26 08:22:07,560] INFO [GroupCoordinator 1]: Stabilized group db954cd2-8764-4a44-90af-3bb7f2069f83 generation 1 (__consumer_offsets-21) with 1 members (kafka.coordinator.group.GroupCoordinator)
08:24:02 kafka | [2024-04-26 08:22:07,566] INFO [GroupCoordinator 1]: Assignment received from leader consumer-db954cd2-8764-4a44-90af-3bb7f2069f83-3-7fda7423-43b4-4275-9f43-73657387fac9 for group db954cd2-8764-4a44-90af-3bb7f2069f83 for generation 1. The group has 1 members, 0 of which are static.
(kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:08,038] INFO [GroupCoordinator 1]: Stabilized group 385d2de3-e329-4c2e-8254-58c110e4f277 generation 1 (__consumer_offsets-27) with 1 members (kafka.coordinator.group.GroupCoordinator) 08:24:02 kafka | [2024-04-26 08:22:08,053] INFO [GroupCoordinator 1]: Assignment received from leader consumer-385d2de3-e329-4c2e-8254-58c110e4f277-2-623aa870-0e4f-4435-b6a7-fae0c0299f99 for group 385d2de3-e329-4c2e-8254-58c110e4f277 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 08:24:02 ++ echo 'Tearing down containers...' 08:24:02 Tearing down containers... 08:24:02 ++ docker-compose down -v --remove-orphans 08:24:02 Stopping policy-apex-pdp ... 08:24:02 Stopping grafana ... 08:24:02 Stopping policy-pap ... 08:24:02 Stopping kafka ... 08:24:02 Stopping policy-api ... 08:24:02 Stopping zookeeper ... 08:24:02 Stopping simulator ... 08:24:02 Stopping prometheus ... 08:24:02 Stopping mariadb ... 08:24:03 Stopping grafana ... done 08:24:03 Stopping prometheus ... done 08:24:13 Stopping policy-apex-pdp ... done 08:24:23 Stopping simulator ... done 08:24:23 Stopping policy-pap ... done 08:24:24 Stopping mariadb ... done 08:24:24 Stopping kafka ... done 08:24:25 Stopping zookeeper ... done 08:24:33 Stopping policy-api ... done 08:24:34 Removing policy-apex-pdp ... 08:24:34 Removing grafana ... 08:24:34 Removing policy-pap ... 08:24:34 Removing kafka ... 08:24:34 Removing policy-api ... 08:24:34 Removing policy-db-migrator ... 08:24:34 Removing zookeeper ... 08:24:34 Removing simulator ... 08:24:34 Removing prometheus ... 08:24:34 Removing mariadb ... 08:24:34 Removing grafana ... done 08:24:34 Removing mariadb ... done 08:24:34 Removing simulator ... done 08:24:34 Removing policy-api ... done 08:24:34 Removing policy-apex-pdp ... done 08:24:34 Removing policy-db-migrator ... done 08:24:34 Removing kafka ... done 08:24:34 Removing prometheus ... 
done 08:24:34 Removing zookeeper ... done 08:24:34 Removing policy-pap ... done 08:24:34 Removing network compose_default 08:24:34 ++ cd /w/workspace/policy-pap-master-project-csit-pap 08:24:34 + load_set 08:24:34 + _setopts=hxB 08:24:34 ++ echo braceexpand:hashall:interactive-comments:xtrace 08:24:34 ++ tr : ' ' 08:24:34 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 08:24:34 + set +o braceexpand 08:24:34 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 08:24:34 + set +o hashall 08:24:34 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 08:24:34 + set +o interactive-comments 08:24:34 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 08:24:34 + set +o xtrace 08:24:34 ++ echo hxB 08:24:34 ++ sed 's/./& /g' 08:24:34 + for i in $(echo "$_setopts" | sed 's/./& /g') 08:24:34 + set +h 08:24:34 + for i in $(echo "$_setopts" | sed 's/./& /g') 08:24:34 + set +x 08:24:34 + rsync /w/workspace/policy-pap-master-project-csit-pap/compose/docker_compose.log /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 08:24:34 + [[ -n /tmp/tmp.2rpNlazw2W ]] 08:24:34 + rsync -av /tmp/tmp.2rpNlazw2W/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 08:24:34 sending incremental file list 08:24:34 ./ 08:24:34 log.html 08:24:34 output.xml 08:24:34 report.html 08:24:34 testplan.txt 08:24:34 08:24:34 sent 918,526 bytes received 95 bytes 1,837,242.00 bytes/sec 08:24:34 total size is 917,984 speedup is 1.00 08:24:34 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models 08:24:34 + exit 1 08:24:34 Build step 'Execute shell' marked build as failure 08:24:34 $ ssh-agent -k 08:24:34 unset SSH_AUTH_SOCK; 08:24:34 unset SSH_AGENT_PID; 08:24:34 echo Agent pid 2142 killed; 08:24:34 [ssh-agent] Stopped. 08:24:34 Robot results publisher started... 08:24:34 INFO: Checking test criticality is deprecated and will be dropped in a future release! 08:24:34 -Parsing output xml: 08:24:34 Done! 08:24:34 WARNING! Could not find file: **/log.html 08:24:34 WARNING! 
Could not find file: **/report.html 08:24:34 -Copying log files to build dir: 08:24:35 Done! 08:24:35 -Assigning results to build: 08:24:35 Done! 08:24:35 -Checking thresholds: 08:24:35 Done! 08:24:35 Done publishing Robot results. 08:24:35 [PostBuildScript] - [INFO] Executing post build scripts. 08:24:35 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4067026727382834406.sh 08:24:35 ---> sysstat.sh 08:24:35 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins17191936694961827262.sh 08:24:35 ---> package-listing.sh 08:24:35 ++ facter osfamily 08:24:35 ++ tr '[:upper:]' '[:lower:]' 08:24:35 + OS_FAMILY=debian 08:24:35 + workspace=/w/workspace/policy-pap-master-project-csit-pap 08:24:35 + START_PACKAGES=/tmp/packages_start.txt 08:24:35 + END_PACKAGES=/tmp/packages_end.txt 08:24:35 + DIFF_PACKAGES=/tmp/packages_diff.txt 08:24:35 + PACKAGES=/tmp/packages_start.txt 08:24:35 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 08:24:35 + PACKAGES=/tmp/packages_end.txt 08:24:35 + case "${OS_FAMILY}" in 08:24:35 + dpkg -l 08:24:35 + grep '^ii' 08:24:35 + '[' -f /tmp/packages_start.txt ']' 08:24:35 + '[' -f /tmp/packages_end.txt ']' 08:24:35 + diff /tmp/packages_start.txt /tmp/packages_end.txt 08:24:35 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 08:24:35 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/ 08:24:35 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/ 08:24:35 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins8889886100071594191.sh 08:24:35 ---> capture-instance-metadata.sh 08:24:35 Setup pyenv: 08:24:35 system 08:24:35 3.8.13 08:24:35 3.9.13 08:24:35 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 08:24:36 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-WerH from file:/tmp/.os_lf_venv 08:24:37 lf-activate-venv(): INFO: Installing: lftools 08:24:46 lf-activate-venv(): INFO: 
Adding /tmp/venv-WerH/bin to PATH 08:24:46 INFO: Running in OpenStack, capturing instance metadata 08:24:47 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins18403376440070541822.sh 08:24:47 provisioning config files... 08:24:47 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config3618601703336951193tmp 08:24:47 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] 08:24:47 Run condition [Regular expression match] preventing perform for step [Provide Configuration files] 08:24:47 [EnvInject] - Injecting environment variables from a build step. 08:24:47 [EnvInject] - Injecting as environment variables the properties content 08:24:47 SERVER_ID=logs 08:24:47 08:24:47 [EnvInject] - Variables injected successfully. 08:24:47 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4329657085612354107.sh 08:24:47 ---> create-netrc.sh 08:24:47 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins15167466917401396788.sh 08:24:47 ---> python-tools-install.sh 08:24:47 Setup pyenv: 08:24:47 system 08:24:47 3.8.13 08:24:47 3.9.13 08:24:47 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 08:24:47 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-WerH from file:/tmp/.os_lf_venv 08:24:48 lf-activate-venv(): INFO: Installing: lftools 08:24:57 lf-activate-venv(): INFO: Adding /tmp/venv-WerH/bin to PATH 08:24:57 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins14339797825566615824.sh 08:24:57 ---> sudo-logs.sh 08:24:57 Archiving 'sudo' log.. 
08:24:57 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins8218995366673184191.sh
08:24:57 ---> job-cost.sh
08:24:57 Setup pyenv:
08:24:57 system
08:24:57 3.8.13
08:24:57 3.9.13
08:24:57 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
08:24:57 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-WerH from file:/tmp/.os_lf_venv
08:24:58 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
08:25:03 lf-activate-venv(): INFO: Adding /tmp/venv-WerH/bin to PATH
08:25:03 INFO: No Stack...
08:25:04 INFO: Retrieving Pricing Info for: v3-standard-8
08:25:04 INFO: Archiving Costs
08:25:04 [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins154775296268774014.sh
08:25:04 ---> logs-deploy.sh
08:25:04 Setup pyenv:
08:25:04 system
08:25:04 3.8.13
08:25:04 3.9.13
08:25:04 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
08:25:04 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-WerH from file:/tmp/.os_lf_venv
08:25:06 lf-activate-venv(): INFO: Installing: lftools
08:25:14 lf-activate-venv(): INFO: Adding /tmp/venv-WerH/bin to PATH
08:25:14 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1665
08:25:14 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
08:25:15 Archives upload complete.
08:25:15 INFO: archiving logs to Nexus
08:25:16 ---> uname -a:
08:25:16 Linux prd-ubuntu1804-docker-8c-8g-35271 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
08:25:16
08:25:16
08:25:16 ---> lscpu:
08:25:16 Architecture: x86_64
08:25:16 CPU op-mode(s): 32-bit, 64-bit
08:25:16 Byte Order: Little Endian
08:25:16 CPU(s): 8
08:25:16 On-line CPU(s) list: 0-7
08:25:16 Thread(s) per core: 1
08:25:16 Core(s) per socket: 1
08:25:16 Socket(s): 8
08:25:16 NUMA node(s): 1
08:25:16 Vendor ID: AuthenticAMD
08:25:16 CPU family: 23
08:25:16 Model: 49
08:25:16 Model name: AMD EPYC-Rome Processor
08:25:16 Stepping: 0
08:25:16 CPU MHz: 2799.996
08:25:16 BogoMIPS: 5599.99
08:25:16 Virtualization: AMD-V
08:25:16 Hypervisor vendor: KVM
08:25:16 Virtualization type: full
08:25:16 L1d cache: 32K
08:25:16 L1i cache: 32K
08:25:16 L2 cache: 512K
08:25:16 L3 cache: 16384K
08:25:16 NUMA node0 CPU(s): 0-7
08:25:16 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
08:25:16
08:25:16
08:25:16 ---> nproc:
08:25:16 8
08:25:16
08:25:16
08:25:16 ---> df -h:
08:25:16 Filesystem Size Used Avail Use% Mounted on
08:25:16 udev 16G 0 16G 0% /dev
08:25:16 tmpfs 3.2G 708K 3.2G 1% /run
08:25:16 /dev/vda1 155G 14G 142G 9% /
08:25:16 tmpfs 16G 0 16G 0% /dev/shm
08:25:16 tmpfs 5.0M 0 5.0M 0% /run/lock
08:25:16 tmpfs 16G 0 16G 0% /sys/fs/cgroup
08:25:16 /dev/vda15 105M 4.4M 100M 5% /boot/efi
08:25:16 tmpfs 3.2G 0 3.2G 0% /run/user/1001
08:25:16
08:25:16
08:25:16 ---> free -m:
08:25:16 total used free shared buff/cache available
08:25:16 Mem: 32167 851 25369 0 5945 30859
08:25:16 Swap: 1023 0 1023
08:25:16
08:25:16
08:25:16 ---> ip addr:
08:25:16 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
08:25:16 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
08:25:16 inet 127.0.0.1/8 scope host lo
08:25:16 valid_lft forever preferred_lft forever
08:25:16 inet6 ::1/128 scope host
08:25:16 valid_lft forever preferred_lft forever
08:25:16 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
08:25:16 link/ether fa:16:3e:6b:82:b2 brd ff:ff:ff:ff:ff:ff
08:25:16 inet 10.30.106.106/23 brd 10.30.107.255 scope global dynamic ens3
08:25:16 valid_lft 85945sec preferred_lft 85945sec
08:25:16 inet6 fe80::f816:3eff:fe6b:82b2/64 scope link
08:25:16 valid_lft forever preferred_lft forever
08:25:16 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
08:25:16 link/ether 02:42:3c:48:d6:94 brd ff:ff:ff:ff:ff:ff
08:25:16 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
08:25:16 valid_lft forever preferred_lft forever
08:25:16
08:25:16
08:25:16 ---> sar -b -r -n DEV:
08:25:16 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-35271) 04/26/24 _x86_64_ (8 CPU)
08:25:16
08:25:16 08:17:44 LINUX RESTART (8 CPU)
08:25:16
08:25:16 08:18:01 tps rtps wtps bread/s bwrtn/s
08:25:16 08:19:01 167.22 87.19 80.04 6257.49 50650.22
08:25:16 08:20:01 99.30 13.93 85.37 1141.14 19361.04
08:25:16 08:21:01 169.42 9.83 159.59 1740.91 49454.56
08:25:16 08:22:01 427.80 13.26 414.53 788.94 106560.51
08:25:16 08:23:01 22.85 0.25 22.60 12.53 9963.51
08:25:16 08:24:01 10.68 0.02 10.66 1.60 9394.93
08:25:16 08:25:01 75.55 1.40 74.15 107.45 13016.13
08:25:16 Average: 138.97 17.98 120.99 1435.72 36914.41
08:25:16
08:25:16 08:18:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
08:25:16 08:19:01 30157872 31666680 2781348 8.44 58784 1766956 1483968 4.37 916924 1587080 128512
08:25:16 08:20:01 29849028 31687596 3090192 9.38 84816 2050908 1424120 4.19 889912 1877312 179256
08:25:16 08:21:01 26014524 31638512 6924696 21.02 135796 5635000 1583232 4.66 1046852 5370456 2652484
08:25:16 08:22:01 24186888 29957804 8752332 26.57 154516 5734704 8217276 24.18 2891412 5270172 636
08:25:16 08:23:01 23929912 29706352 9009308 27.35 156044 5736476 8644824 25.44 3160748 5250660 488
08:25:16 08:24:01 23874624 29679028 9064596 27.52 156164 5763484 8730212 25.69 3198584 5266236 26024
08:25:16 08:25:01 25985136 31604912 6954084 21.11 157864 5596508 1534524 4.51 1311452 5108832 1484
08:25:16 Average: 26285426 30848698 6653794 20.20 129141 4612005 4516879 13.29 1916555 4247250 426983
08:25:16
08:25:16 08:18:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
08:25:16 08:19:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
08:25:16 08:19:01 lo 1.87 1.87 0.19 0.19 0.00 0.00 0.00 0.00
08:25:16 08:19:01 ens3 393.52 257.97 1499.45 60.62 0.00 0.00 0.00 0.00
08:25:16 08:20:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
08:25:16 08:20:01 lo 1.60 1.60 0.17 0.17 0.00 0.00 0.00 0.00
08:25:16 08:20:01 ens3 51.36 36.63 713.26 8.38 0.00 0.00 0.00 0.00
08:25:16 08:21:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
08:25:16 08:21:01 lo 12.60 12.60 1.22 1.22 0.00 0.00 0.00 0.00
08:25:16 08:21:01 ens3 1022.73 535.39 29597.63 39.46 0.00 0.00 0.00 0.00
08:25:16 08:21:01 br-5ea65bf9defe 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
08:25:16 08:22:01 vethd372ab8 0.70 0.87 0.05 0.05 0.00 0.00 0.00 0.00
08:25:16 08:22:01 veth2fa051b 0.00 0.30 0.00 0.02 0.00 0.00 0.00 0.00
08:25:16 08:22:01 veth881873e 0.15 0.45 0.01 0.02 0.00 0.00 0.00 0.00
08:25:16 08:22:01 vethce41c7a 2.33 2.32 0.19 0.19 0.00 0.00 0.00 0.00
08:25:16 08:23:01 vethd372ab8 4.07 5.35 0.81 0.53 0.00 0.00 0.00 0.00
08:25:16 08:23:01 veth2fa051b 0.00 0.05 0.00 0.00 0.00 0.00 0.00 0.00
08:25:16 08:23:01 veth881873e 0.53 0.52 0.05 1.48 0.00 0.00 0.00 0.00
08:25:16 08:23:01 vethce41c7a 48.86 44.13 15.27 39.27 0.00 0.00 0.00 0.00
08:25:16 08:24:01 vethd372ab8 3.18 4.67 0.66 0.36 0.00 0.00 0.00 0.00
08:25:16 08:24:01 veth2fa051b 0.00 0.03 0.00 0.00 0.00 0.00 0.00 0.00
08:25:16 08:24:01 veth881873e 1.03 1.28 0.12 1.61 0.00 0.00 0.00 0.00
08:25:16 08:24:01 vethce41c7a 6.98 9.95 1.64 0.75 0.00 0.00 0.00 0.00
08:25:16 08:25:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
08:25:16 08:25:01 lo 34.73 34.73 6.21 6.21 0.00 0.00 0.00 0.00
08:25:16 08:25:01 ens3 1572.70 902.37 31896.71 147.51 0.00 0.00 0.00 0.00
08:25:16 Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
08:25:16 Average: lo 4.53 4.53 0.85 0.85 0.00 0.00 0.00 0.00
08:25:16 Average: ens3 223.82 128.03 4555.90 20.97 0.00 0.00 0.00 0.00
08:25:16
08:25:16
08:25:16 ---> sar -P ALL:
08:25:16 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-35271) 04/26/24 _x86_64_ (8 CPU)
08:25:16
08:25:16 08:17:44 LINUX RESTART (8 CPU)
08:25:16
08:25:16 08:18:01 CPU %user %nice %system %iowait %steal %idle
08:25:16 08:19:01 all 9.58 0.00 1.32 3.57 0.04 85.49
08:25:16 08:19:01 0 4.96 0.00 1.00 0.80 0.03 93.20
08:25:16 08:19:01 1 2.92 0.00 0.75 0.27 0.07 96.00
08:25:16 08:19:01 2 6.06 0.00 0.88 0.77 0.03 92.25
08:25:16 08:19:01 3 9.74 0.00 1.44 1.32 0.03 87.47
08:25:16 08:19:01 4 11.17 0.00 1.34 2.62 0.02 84.85
08:25:16 08:19:01 5 25.90 0.00 1.79 2.35 0.05 69.91
08:25:16 08:19:01 6 11.82 0.00 2.38 0.37 0.03 85.39
08:25:16 08:19:01 7 4.07 0.00 0.99 20.10 0.05 74.80
08:25:16 08:20:01 all 10.57 0.00 0.67 1.82 0.03 86.91
08:25:16 08:20:01 0 7.91 0.00 0.79 0.35 0.02 90.93
08:25:16 08:20:01 1 13.19 0.00 0.65 1.87 0.03 84.25
08:25:16 08:20:01 2 5.22 0.00 0.48 0.15 0.02 94.13
08:25:16 08:20:01 3 5.25 0.00 0.22 0.43 0.02 94.08
08:25:16 08:20:01 4 10.86 0.00 0.87 2.45 0.03 85.78
08:25:16 08:20:01 5 29.86 0.00 1.40 1.13 0.05 67.56
08:25:16 08:20:01 6 5.95 0.00 0.33 0.00 0.02 93.70
08:25:16 08:20:01 7 6.38 0.00 0.58 8.18 0.07 84.79
08:25:16 08:21:01 all 13.43 0.00 5.62 4.48 0.08 76.39
08:25:16 08:21:01 0 14.49 0.00 4.61 3.55 0.10 77.24
08:25:16 08:21:01 1 15.18 0.00 4.77 4.88 0.09 75.09
08:25:16 08:21:01 2 13.30 0.00 5.73 2.29 0.10 78.58
08:25:16 08:21:01 3 12.64 0.00 5.84 1.54 0.05 79.93
08:25:16 08:21:01 4 16.86 0.00 6.18 11.55 0.07 65.34
08:25:16 08:21:01 5 14.48 0.00 5.36 7.61 0.07 72.48
08:25:16 08:21:01 6 11.37 0.00 5.97 0.17 0.07 82.42
08:25:16 08:21:01 7 9.13 0.00 6.45 4.28 0.07 80.08
08:25:16 08:22:01 all 22.69 0.00 3.85 7.71 0.08 65.67
08:25:16 08:22:01 0 26.77 0.00 3.78 7.64 0.08 61.73
08:25:16 08:22:01 1 11.39 0.00 3.56 3.47 0.07 81.52
08:25:16 08:22:01 2 19.09 0.00 3.27 4.97 0.10 72.57
08:25:16 08:22:01 3 23.96 0.00 4.98 38.13 0.10 32.83
08:25:16 08:22:01 4 26.80 0.00 3.86 2.92 0.08 66.33
08:25:16 08:22:01 5 19.36 0.00 3.74 1.84 0.08 74.97
08:25:16 08:22:01 6 31.47 0.00 4.62 1.30 0.08 62.53
08:25:16 08:22:01 7 22.71 0.00 3.01 1.63 0.08 72.57
08:25:16 08:23:01 all 10.05 0.00 0.98 0.60 0.06 88.31
08:25:16 08:23:01 0 10.43 0.00 1.09 0.00 0.03 88.45
08:25:16 08:23:01 1 10.05 0.00 1.03 0.03 0.05 88.84
08:25:16 08:23:01 2 9.69 0.00 0.72 0.00 0.07 89.52
08:25:16 08:23:01 3 9.79 0.00 0.90 4.64 0.07 84.60
08:25:16 08:23:01 4 10.29 0.00 0.87 0.02 0.08 88.74
08:25:16 08:23:01 5 9.72 0.00 1.04 0.00 0.07 89.18
08:25:16 08:23:01 6 11.21 0.00 1.20 0.02 0.05 87.52
08:25:16 08:23:01 7 9.19 0.00 0.99 0.07 0.08 89.67
08:25:16 08:24:01 all 1.01 0.00 0.26 0.63 0.04 98.06
08:25:16 08:24:01 0 1.04 0.00 0.32 0.00 0.07 98.58
08:25:16 08:24:01 1 1.35 0.00 0.28 0.05 0.03 98.28
08:25:16 08:24:01 2 1.40 0.00 0.32 0.02 0.03 98.23
08:25:16 08:24:01 3 0.75 0.00 0.22 4.77 0.03 94.23
08:25:16 08:24:01 4 1.84 0.00 0.27 0.10 0.05 97.74
08:25:16 08:24:01 5 0.40 0.00 0.22 0.07 0.03 99.28
08:25:16 08:24:01 6 0.78 0.00 0.23 0.00 0.03 98.95
08:25:16 08:24:01 7 0.48 0.00 0.25 0.03 0.03 99.20
08:25:16 08:25:01 all 5.78 0.00 0.69 0.90 0.04 92.60
08:25:16 08:25:01 0 16.91 0.00 1.12 0.45 0.03 81.49
08:25:16 08:25:01 1 5.12 0.00 0.53 0.15 0.05 94.14
08:25:16 08:25:01 2 3.69 0.00 0.57 1.29 0.03 94.43
08:25:16 08:25:01 3 13.72 0.00 0.75 4.45 0.05 81.03
08:25:16 08:25:01 4 1.12 0.00 0.55 0.25 0.03 98.04
08:25:16 08:25:01 5 1.21 0.00 0.67 0.20 0.03 97.89
08:25:16 08:25:01 6 3.62 0.00 0.73 0.27 0.03 95.34
08:25:16 08:25:01 7 0.80 0.00 0.62 0.13 0.03 98.41
08:25:16 Average: all 10.42 0.00 1.90 2.81 0.05 84.81
08:25:16 Average: 0 11.77 0.00 1.81 1.82 0.05 84.55
08:25:16 Average: 1 8.43 0.00 1.64 1.52 0.06 88.35
08:25:16 Average: 2 8.33 0.00 1.70 1.35 0.06 88.57
08:25:16 Average: 3 10.80 0.00 2.04 7.85 0.05 79.26
08:25:16 Average: 4 11.26 0.00 1.98 2.82 0.05 83.89
08:25:16 Average: 5 14.42 0.00 2.02 1.87 0.06 81.63
08:25:16 Average: 6 10.86 0.00 2.20 0.30 0.05 86.59
08:25:16 Average: 7 7.52 0.00 1.83 4.92 0.06 85.68
08:25:16
08:25:16
08:25:16