14:08:04 Started by upstream project "policy-docker-master-merge-java" build number 346
14:08:04 originally caused by:
14:08:04 Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/docker/+/137652
14:08:04 Running as SYSTEM
14:08:04 [EnvInject] - Loading node environment variables.
14:08:04 Building remotely on prd-ubuntu1804-docker-8c-8g-21829 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
14:08:04 [ssh-agent] Looking for ssh-agent implementation...
14:08:04 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
14:08:04 $ ssh-agent
14:08:04 SSH_AUTH_SOCK=/tmp/ssh-JsAABIdVHeJf/agent.2079
14:08:04 SSH_AGENT_PID=2081
14:08:04 [ssh-agent] Started.
14:08:04 Running ssh-add (command line suppressed)
14:08:04 Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_9891480896115039705.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_9891480896115039705.key)
14:08:04 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
14:08:04 The recommended git tool is: NONE
14:08:06 using credential onap-jenkins-ssh
14:08:06 Wiping out workspace first.
14:08:06 Cloning the remote Git repository
14:08:06 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
14:08:06 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
14:08:06 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
14:08:06 > git --version # timeout=10
14:08:06 > git --version # 'git version 2.17.1'
14:08:06 using GIT_SSH to set credentials Gerrit user
14:08:06 Verifying host key using manually-configured host key entries
14:08:06 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
14:08:06 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
14:08:06 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
14:08:07 Avoid second fetch
14:08:07 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
14:08:07 Checking out Revision c5936fb131831992ac8da40fb56599dfb0ae1b5e (refs/remotes/origin/master)
14:08:07 > git config core.sparsecheckout # timeout=10
14:08:07 > git checkout -f c5936fb131831992ac8da40fb56599dfb0ae1b5e # timeout=30
14:08:07 Commit message: "Disable drools pdp test in CSIT until drools is fixed"
14:08:07 > git rev-list --no-walk cebb4172163dc04b43be7e34d9a4b374370492f8 # timeout=10
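The git steps above can be replayed outside Jenkins when a failure needs local reproduction; a minimal sketch using the mirror URL and revision from the log (the workspace path is arbitrary):

  # Mirror the Jenkins git plugin's init/fetch/detached-checkout sequence.
  WORKSPACE=/tmp/policy-docker
  git init "$WORKSPACE" && cd "$WORKSPACE"
  git fetch --tags git://cloud.onap.org/mirror/policy/docker.git '+refs/heads/*:refs/remotes/origin/*'
  git checkout -f c5936fb131831992ac8da40fb56599dfb0ae1b5e  # revision built by this job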
14:08:07 provisioning config files...
14:08:07 copy managed file [npmrc] to file:/home/jenkins/.npmrc
14:08:07 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf
14:08:07 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins8462579616346211994.sh
14:08:07 ---> python-tools-install.sh
14:08:07 Setup pyenv:
14:08:07 * system (set by /opt/pyenv/version)
14:08:07 * 3.8.13 (set by /opt/pyenv/version)
14:08:07 * 3.9.13 (set by /opt/pyenv/version)
14:08:07 * 3.10.6 (set by /opt/pyenv/version)
14:08:12 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-3z0W
14:08:12 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv
14:08:15 lf-activate-venv(): INFO: Installing: lftools
14:08:51 lf-activate-venv(): INFO: Adding /tmp/venv-3z0W/bin to PATH
14:08:51 Generating Requirements File
14:09:30 Python 3.10.6
14:09:30 pip 24.0 from /tmp/venv-3z0W/lib/python3.10/site-packages/pip (python 3.10)
14:09:30 appdirs==1.4.4
14:09:30 argcomplete==3.2.3
14:09:30 aspy.yaml==1.3.0
14:09:30 attrs==23.2.0
14:09:30 autopage==0.5.2
14:09:30 beautifulsoup4==4.12.3
14:09:30 boto3==1.34.80
14:09:30 botocore==1.34.80
14:09:30 bs4==0.0.2
14:09:30 cachetools==5.3.3
14:09:30 certifi==2024.2.2
14:09:30 cffi==1.16.0
14:09:30 cfgv==3.4.0
14:09:30 chardet==5.2.0
14:09:30 charset-normalizer==3.3.2
14:09:30 click==8.1.7
14:09:30 cliff==4.6.0
14:09:30 cmd2==2.4.3
14:09:30 cryptography==3.3.2
14:09:30 debtcollector==3.0.0
14:09:30 decorator==5.1.1
14:09:30 defusedxml==0.7.1
14:09:30 Deprecated==1.2.14
14:09:30 distlib==0.3.8
14:09:30 dnspython==2.6.1
14:09:30 docker==4.2.2
14:09:30 dogpile.cache==1.3.2
14:09:30 email_validator==2.1.1
14:09:30 filelock==3.13.3
14:09:30 future==1.0.0
14:09:30 gitdb==4.0.11
14:09:30 GitPython==3.1.43
14:09:30 google-auth==2.29.0
14:09:30 httplib2==0.22.0
14:09:30 identify==2.5.35
14:09:30 idna==3.6
14:09:30 importlib-resources==1.5.0
14:09:30 iso8601==2.1.0
14:09:30 Jinja2==3.1.3
14:09:30 jmespath==1.0.1
14:09:30 jsonpatch==1.33
14:09:30 jsonpointer==2.4
14:09:30 jsonschema==4.21.1
14:09:30 jsonschema-specifications==2023.12.1
14:09:30 keystoneauth1==5.6.0
14:09:30 kubernetes==29.0.0
14:09:30 lftools==0.37.10
14:09:30 lxml==5.2.1
14:09:30 MarkupSafe==2.1.5
14:09:30 msgpack==1.0.8
14:09:30 multi_key_dict==2.0.3
14:09:30 munch==4.0.0
14:09:30 netaddr==1.2.1
14:09:30 netifaces==0.11.0
14:09:30 niet==1.4.2
14:09:30 nodeenv==1.8.0
14:09:30 oauth2client==4.1.3
14:09:30 oauthlib==3.2.2
14:09:30 openstacksdk==3.0.0
14:09:30 os-client-config==2.1.0
14:09:30 os-service-types==1.7.0
14:09:30 osc-lib==3.0.1
14:09:30 oslo.config==9.4.0
14:09:30 oslo.context==5.5.0
14:09:30 oslo.i18n==6.3.0
14:09:30 oslo.log==5.5.1
14:09:30 oslo.serialization==5.4.0
14:09:30 oslo.utils==7.1.0
14:09:30 packaging==24.0
14:09:30 pbr==6.0.0
14:09:30 platformdirs==4.2.0
14:09:30 prettytable==3.10.0
14:09:30 pyasn1==0.6.0
14:09:30 pyasn1_modules==0.4.0
14:09:30 pycparser==2.22
14:09:30 pygerrit2==2.0.15
14:09:30 PyGithub==2.3.0
14:09:30 pyinotify==0.9.6
14:09:30 PyJWT==2.8.0
14:09:30 PyNaCl==1.5.0
14:09:30 pyparsing==2.4.7
14:09:30 pyperclip==1.8.2
14:09:30 pyrsistent==0.20.0
14:09:30 python-cinderclient==9.5.0
14:09:30 python-dateutil==2.9.0.post0
14:09:30 python-heatclient==3.5.0
14:09:30 python-jenkins==1.8.2
14:09:30 python-keystoneclient==5.4.0
14:09:30 python-magnumclient==4.4.0
14:09:30 python-novaclient==18.6.0
14:09:30 python-openstackclient==6.6.0
14:09:30 python-swiftclient==4.5.0
14:09:30 PyYAML==6.0.1
14:09:30 referencing==0.34.0
14:09:30 requests==2.31.0
14:09:30 requests-oauthlib==2.0.0
14:09:30 requestsexceptions==1.4.0
14:09:30 rfc3986==2.0.0
14:09:30 rpds-py==0.18.0
14:09:30 rsa==4.9
14:09:30 ruamel.yaml==0.18.6
14:09:30 ruamel.yaml.clib==0.2.8
14:09:30 s3transfer==0.10.1
14:09:30 simplejson==3.19.2
14:09:30 six==1.16.0
14:09:30 smmap==5.0.1
14:09:30 soupsieve==2.5
14:09:30 stevedore==5.2.0
14:09:30 tabulate==0.9.0
14:09:30 toml==0.10.2
14:09:30 tomlkit==0.12.4
14:09:30 tqdm==4.66.2
14:09:30 typing_extensions==4.11.0
14:09:30 tzdata==2024.1
14:09:30 urllib3==1.26.18
14:09:30 virtualenv==20.25.1
14:09:30 wcwidth==0.2.13
14:09:30 websocket-client==1.7.0
14:09:30 wrapt==1.16.0
14:09:30 xdg==6.0.0
14:09:30 xmltodict==0.13.0
14:09:30 yq==3.2.3
14:09:30 [EnvInject] - Injecting environment variables from a build step.
14:09:31 [EnvInject] - Injecting as environment variables the properties content
14:09:31 SET_JDK_VERSION=openjdk17
14:09:31 GIT_URL="git://cloud.onap.org/mirror"
14:09:31
14:09:31 [EnvInject] - Variables injected successfully.
14:09:31 [policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins7988746917834826800.sh
14:09:31 ---> update-java-alternatives.sh
14:09:31 ---> Updating Java version
14:09:31 ---> Ubuntu/Debian system detected
14:09:31 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
14:09:31 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
14:09:31 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
14:09:31 openjdk version "17.0.4" 2022-07-19
14:09:31 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
14:09:31 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
14:09:31 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
14:09:31 [EnvInject] - Injecting environment variables from a build step.
14:09:31 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
14:09:31 [EnvInject] - Variables injected successfully.
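The update-java-alternatives.sh step pins the node to OpenJDK 17, matching SET_JDK_VERSION=openjdk17. A sketch of the equivalent manual switch (paths as logged; this is not the LF script itself):

  # Select OpenJDK 17 for java/javac on a Debian/Ubuntu node.
  sudo update-alternatives --set java /usr/lib/jvm/java-17-openjdk-amd64/bin/java
  sudo update-alternatives --set javac /usr/lib/jvm/java-17-openjdk-amd64/bin/javac
  export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
  java -version  # should report openjdk 17.0.4, as above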
14:09:31 [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins12946586964289018512.sh
14:09:31 + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap
14:09:31 + set +u
14:09:31 + save_set
14:09:31 + RUN_CSIT_SAVE_SET=ehxB
14:09:31 + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace
14:09:31 + '[' 1 -eq 0 ']'
14:09:31 + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
14:09:31 + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
14:09:31 + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
14:09:31 + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
14:09:31 + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts
14:09:31 + export ROBOT_VARIABLES=
14:09:31 + ROBOT_VARIABLES=
14:09:31 + export PROJECT=pap
14:09:31 + PROJECT=pap
14:09:31 + cd /w/workspace/policy-pap-master-project-csit-pap
14:09:31 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
14:09:31 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap
14:09:31 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
14:09:31 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']'
14:09:31 + relax_set
14:09:31 + set +e
14:09:31 + set +o pipefail
14:09:31 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh
14:09:31 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
14:09:31 +++ mktemp -d
14:09:31 ++ ROBOT_VENV=/tmp/tmp.f2xXiAH3bV
14:09:31 ++ echo ROBOT_VENV=/tmp/tmp.f2xXiAH3bV
14:09:31 +++ python3 --version
14:09:31 ++ echo 'Python version is: Python 3.6.9'
14:09:31 Python version is: Python 3.6.9
14:09:31 ++ python3 -m venv --clear /tmp/tmp.f2xXiAH3bV
14:09:33 ++ source /tmp/tmp.f2xXiAH3bV/bin/activate
14:09:33 +++ deactivate nondestructive
14:09:33 +++ '[' -n '' ']'
14:09:33 +++ '[' -n '' ']'
14:09:33 +++ '[' -n /bin/bash -o -n '' ']'
14:09:33 +++ hash -r
14:09:33 +++ '[' -n '' ']'
14:09:33 +++ unset VIRTUAL_ENV
14:09:33 +++ '[' '!' nondestructive = nondestructive ']'
14:09:33 +++ VIRTUAL_ENV=/tmp/tmp.f2xXiAH3bV
14:09:33 +++ export VIRTUAL_ENV
14:09:33 +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
14:09:33 +++ PATH=/tmp/tmp.f2xXiAH3bV/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
14:09:33 +++ export PATH
14:09:33 +++ '[' -n '' ']'
14:09:33 +++ '[' -z '' ']'
14:09:33 +++ _OLD_VIRTUAL_PS1=
14:09:33 +++ '[' 'x(tmp.f2xXiAH3bV) ' '!=' x ']'
14:09:33 +++ PS1='(tmp.f2xXiAH3bV) '
14:09:33 +++ export PS1
14:09:33 +++ '[' -n /bin/bash -o -n '' ']'
14:09:33 +++ hash -r
14:09:33 ++ set -exu
14:09:33 ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
14:09:36 ++ echo 'Installing Python Requirements'
14:09:36 Installing Python Requirements
14:09:36 ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt
14:09:56 ++ python3 -m pip -qq freeze
14:09:56 bcrypt==4.0.1
14:09:56 beautifulsoup4==4.12.3
14:09:56 bitarray==2.9.2
14:09:56 certifi==2024.2.2
14:09:56 cffi==1.15.1
14:09:56 charset-normalizer==2.0.12
14:09:56 cryptography==40.0.2
14:09:56 decorator==5.1.1
14:09:56 elasticsearch==7.17.9
14:09:56 elasticsearch-dsl==7.4.1
14:09:56 enum34==1.1.10
14:09:56 idna==3.6
14:09:56 importlib-resources==5.4.0
14:09:56 ipaddr==2.2.0
14:09:56 isodate==0.6.1
14:09:56 jmespath==0.10.0
14:09:56 jsonpatch==1.32
14:09:56 jsonpath-rw==1.4.0
14:09:56 jsonpointer==2.3
14:09:56 lxml==5.2.1
14:09:56 netaddr==0.8.0
14:09:56 netifaces==0.11.0
14:09:56 odltools==0.1.28
14:09:56 paramiko==3.4.0
14:09:56 pkg_resources==0.0.0
14:09:56 ply==3.11
14:09:56 pyang==2.6.0
14:09:56 pyangbind==0.8.1
14:09:56 pycparser==2.21
14:09:56 pyhocon==0.3.60
14:09:56 PyNaCl==1.5.0
14:09:56 pyparsing==3.1.2
14:09:56 python-dateutil==2.9.0.post0
14:09:56 regex==2023.8.8
14:09:56 requests==2.27.1
14:09:56 robotframework==6.1.1
14:09:56 robotframework-httplibrary==0.4.2
14:09:56 robotframework-pythonlibcore==3.0.0
14:09:56 robotframework-requests==0.9.4
14:09:56 robotframework-selenium2library==3.0.0
14:09:56 robotframework-seleniumlibrary==5.1.3
14:09:56 robotframework-sshlibrary==3.8.0
14:09:56 scapy==2.5.0
14:09:56 scp==0.14.5
14:09:56 selenium==3.141.0
14:09:56 six==1.16.0
14:09:56 soupsieve==2.3.2.post1
14:09:56 urllib3==1.26.18
14:09:56 waitress==2.0.0
14:09:56 WebOb==1.8.7
14:09:56 WebTest==3.0.0
14:09:56 zipp==3.6.0
14:09:56 ++ mkdir -p /tmp/tmp.f2xXiAH3bV/src/onap
14:09:56 ++ rm -rf /tmp/tmp.f2xXiAH3bV/src/onap/testsuite
14:09:56 ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre
14:10:05 ++ echo 'Installing python confluent-kafka library'
14:10:05 Installing python confluent-kafka library
14:10:05 ++ python3 -m pip install -qq confluent-kafka
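prepare-robot-env.sh boils down to a throwaway virtualenv with pinned tooling; a condensed sketch of the steps traced above (pins and paths from the log; pylibs.txt is the requirements file the repo ships):

  # Isolated venv for the Robot run; old pip/setuptools pins keep it
  # compatible with the node's Python 3.6.9.
  ROBOT_VENV=$(mktemp -d)
  python3 -m venv --clear "$ROBOT_VENV"
  . "$ROBOT_VENV/bin/activate"
  python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
  python3 -m pip install -qq -r csit/resources/scripts/pylibs.txt
  python3 -m pip install -qq --upgrade \
    --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple \
    'robotframework-onap==0.6.0.*' --pre
  python3 -m pip install -qq confluent-kafka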
14:10:10 ++ echo 'Uninstall docker-py and reinstall docker.'
14:10:10 Uninstall docker-py and reinstall docker.
14:10:10 ++ python3 -m pip uninstall -y -qq docker
14:10:10 ++ python3 -m pip install -U -qq docker
14:10:11 ++ python3 -m pip -qq freeze
14:10:11 bcrypt==4.0.1
14:10:11 beautifulsoup4==4.12.3
14:10:11 bitarray==2.9.2
14:10:11 certifi==2024.2.2
14:10:11 cffi==1.15.1
14:10:11 charset-normalizer==2.0.12
14:10:11 confluent-kafka==2.3.0
14:10:11 cryptography==40.0.2
14:10:11 decorator==5.1.1
14:10:11 deepdiff==5.7.0
14:10:11 dnspython==2.2.1
14:10:11 docker==5.0.3
14:10:11 elasticsearch==7.17.9
14:10:11 elasticsearch-dsl==7.4.1
14:10:11 enum34==1.1.10
14:10:11 future==1.0.0
14:10:11 idna==3.6
14:10:11 importlib-resources==5.4.0
14:10:11 ipaddr==2.2.0
14:10:11 isodate==0.6.1
14:10:11 Jinja2==3.0.3
14:10:11 jmespath==0.10.0
14:10:11 jsonpatch==1.32
14:10:11 jsonpath-rw==1.4.0
14:10:11 jsonpointer==2.3
14:10:11 kafka-python==2.0.2
14:10:11 lxml==5.2.1
14:10:11 MarkupSafe==2.0.1
14:10:11 more-itertools==5.0.0
14:10:11 netaddr==0.8.0
14:10:11 netifaces==0.11.0
14:10:11 odltools==0.1.28
14:10:11 ordered-set==4.0.2
14:10:11 paramiko==3.4.0
14:10:11 pbr==6.0.0
14:10:11 pkg_resources==0.0.0
14:10:11 ply==3.11
14:10:11 protobuf==3.19.6
14:10:11 pyang==2.6.0
14:10:11 pyangbind==0.8.1
14:10:11 pycparser==2.21
14:10:11 pyhocon==0.3.60
14:10:11 PyNaCl==1.5.0
14:10:11 pyparsing==3.1.2
14:10:11 python-dateutil==2.9.0.post0
14:10:11 PyYAML==6.0.1
14:10:11 regex==2023.8.8
14:10:11 requests==2.27.1
14:10:11 robotframework==6.1.1
14:10:11 robotframework-httplibrary==0.4.2
14:10:11 robotframework-onap==0.6.0.dev105
14:10:11 robotframework-pythonlibcore==3.0.0
14:10:11 robotframework-requests==0.9.4
14:10:11 robotframework-selenium2library==3.0.0
14:10:11 robotframework-seleniumlibrary==5.1.3
14:10:11 robotframework-sshlibrary==3.8.0
14:10:11 robotlibcore-temp==1.0.2
14:10:11 scapy==2.5.0
14:10:11 scp==0.14.5
14:10:11 selenium==3.141.0
14:10:11 six==1.16.0
14:10:11 soupsieve==2.3.2.post1
14:10:11 urllib3==1.26.18
14:10:11 waitress==2.0.0
14:10:11 WebOb==1.8.7
14:10:11 websocket-client==1.3.1
14:10:11 WebTest==3.0.0
14:10:11 zipp==3.6.0
14:10:11 ++ uname
14:10:11 ++ grep -q Linux
14:10:11 ++ sudo apt-get -y -qq install libxml2-utils
14:10:12 + load_set
14:10:12 + _setopts=ehuxB
14:10:12 ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace
14:10:12 ++ tr : ' '
14:10:12 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
14:10:12 + set +o braceexpand
14:10:12 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
14:10:12 + set +o hashall
14:10:12 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
14:10:12 + set +o interactive-comments
14:10:12 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
14:10:12 + set +o nounset
14:10:12 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
14:10:12 + set +o xtrace
14:10:12 ++ echo ehuxB
14:10:12 ++ sed 's/./& /g'
14:10:12 + for i in $(echo "$_setopts" | sed 's/./& /g')
14:10:12 + set +e
14:10:12 + for i in $(echo "$_setopts" | sed 's/./& /g')
14:10:12 + set +h
14:10:12 + for i in $(echo "$_setopts" | sed 's/./& /g')
14:10:12 + set +u
14:10:12 + for i in $(echo "$_setopts" | sed 's/./& /g')
14:10:12 + set +x
14:10:12 + source_safely /tmp/tmp.f2xXiAH3bV/bin/activate
14:10:12 + '[' -z /tmp/tmp.f2xXiAH3bV/bin/activate ']'
14:10:12 + relax_set
14:10:12 + set +e
14:10:12 + set +o pipefail
14:10:12 + . /tmp/tmp.f2xXiAH3bV/bin/activate
14:10:12 ++ deactivate nondestructive
14:10:12 ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']'
14:10:12 ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
14:10:12 ++ export PATH
14:10:12 ++ unset _OLD_VIRTUAL_PATH
14:10:12 ++ '[' -n '' ']'
14:10:12 ++ '[' -n /bin/bash -o -n '' ']'
14:10:12 ++ hash -r
14:10:12 ++ '[' -n '' ']'
14:10:12 ++ unset VIRTUAL_ENV
14:10:12 ++ '[' '!' nondestructive = nondestructive ']'
14:10:12 ++ VIRTUAL_ENV=/tmp/tmp.f2xXiAH3bV
14:10:12 ++ export VIRTUAL_ENV
14:10:12 ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
14:10:12 ++ PATH=/tmp/tmp.f2xXiAH3bV/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin
14:10:12 ++ export PATH
14:10:12 ++ '[' -n '' ']'
14:10:12 ++ '[' -z '' ']'
14:10:12 ++ _OLD_VIRTUAL_PS1='(tmp.f2xXiAH3bV) '
14:10:12 ++ '[' 'x(tmp.f2xXiAH3bV) ' '!=' x ']'
14:10:12 ++ PS1='(tmp.f2xXiAH3bV) (tmp.f2xXiAH3bV) '
14:10:12 ++ export PS1
14:10:12 ++ '[' -n /bin/bash -o -n '' ']'
14:10:12 ++ hash -r
14:10:12 + load_set
14:10:12 + _setopts=hxB
14:10:12 ++ echo braceexpand:hashall:interactive-comments:xtrace
14:10:12 ++ tr : ' '
14:10:12 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
14:10:12 + set +o braceexpand
14:10:12 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
14:10:12 + set +o hashall
14:10:12 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
14:10:12 + set +o interactive-comments
14:10:12 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
14:10:12 + set +o xtrace
14:10:12 ++ echo hxB
14:10:12 ++ sed 's/./& /g'
14:10:12 + for i in $(echo "$_setopts" | sed 's/./& /g')
14:10:12 + set +h
14:10:12 + for i in $(echo "$_setopts" | sed 's/./& /g')
14:10:12 + set +x
14:10:12 + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
14:10:12 + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
14:10:12 + export TEST_OPTIONS=
14:10:12 + TEST_OPTIONS=
14:10:12 ++ mktemp -d
14:10:12 + WORKDIR=/tmp/tmp.6QreRUgV9i
14:10:12 + cd /tmp/tmp.6QreRUgV9i
14:10:12 + docker login -u docker -p docker nexus3.onap.org:10001
14:10:13 WARNING! Using --password via the CLI is insecure. Use --password-stdin.
14:10:13 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json.
14:10:13 Configure a credential helper to remove this warning. See
14:10:13 https://docs.docker.com/engine/reference/commandline/login/#credentials-store
14:10:13
14:10:13 Login Succeeded
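The login works, but Docker itself flags the -p usage as insecure. The warning's advice is easy to follow with --password-stdin (same throwaway credentials and registry as above):

  # Keep the password out of argv and shell history.
  echo docker | docker login -u docker --password-stdin nexus3.onap.org:10001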
14:10:13 + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
14:10:13 + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
14:10:13 + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh'
14:10:13 Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
14:10:13 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
14:10:13 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']'
14:10:13 + relax_set
14:10:13 + set +e
14:10:13 + set +o pipefail
14:10:13 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh
14:10:13 ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh
14:10:13 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
14:10:13 ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview
14:10:13 +++ GERRIT_BRANCH=master
14:10:13 +++ echo GERRIT_BRANCH=master
14:10:13 GERRIT_BRANCH=master
14:10:13 +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models
14:10:13 +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models
14:10:13 +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models
14:10:13 Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'...
14:10:14 +++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
14:10:14 +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies
14:10:14 +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
14:10:14 +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
14:10:14 +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
14:10:14 +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json
14:10:14 ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana
14:10:14 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
14:10:14 +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
14:10:14 +++ grafana=false
14:10:14 +++ gui=false
14:10:14 +++ [[ 2 -gt 0 ]]
14:10:14 +++ key=apex-pdp
14:10:14 +++ case $key in
14:10:14 +++ echo apex-pdp
14:10:14 apex-pdp
14:10:14 +++ component=apex-pdp
14:10:14 +++ shift
14:10:14 +++ [[ 1 -gt 0 ]]
14:10:14 +++ key=--grafana
14:10:14 +++ case $key in
14:10:14 +++ grafana=true
14:10:14 +++ shift
14:10:14 +++ [[ 0 -gt 0 ]]
14:10:14 +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
14:10:14 +++ echo 'Configuring docker compose...'
14:10:14 Configuring docker compose...
14:10:14 +++ source export-ports.sh
14:10:14 +++ source get-versions.sh
14:10:18 +++ '[' -z pap ']'
14:10:18 +++ '[' -n apex-pdp ']'
14:10:18 +++ '[' apex-pdp == logs ']'
14:10:18 +++ '[' true = true ']'
14:10:18 +++ echo 'Starting apex-pdp application with Grafana'
14:10:18 Starting apex-pdp application with Grafana
14:10:18 +++ docker-compose up -d apex-pdp grafana
14:10:18 Creating network "compose_default" with the default driver
14:10:18 Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)...
14:10:19 latest: Pulling from prom/prometheus
14:10:22 Digest: sha256:dec2018ae55885fed717f25c289b8c9cff0bf5fbb9e619fb49b6161ac493c016
14:10:22 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest
14:10:22 Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)...
14:10:27 latest: Pulling from grafana/grafana
14:10:32 Digest: sha256:753bbb971071480d6630d3aa0d55345188c02f39456664f67c1ea443593638d0
14:10:32 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest
14:10:32 Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)...
14:10:32 10.10.2: Pulling from mariadb
14:10:36 Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e
14:10:36 Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2
14:10:36 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1)...
14:10:36 3.1.1: Pulling from onap/policy-models-simulator
14:10:42 Digest: sha256:a22fada6cc93fcd88ed863d58b0b428eaaf13d3b02579e649141f6bdb5fac181
14:10:42 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1
14:10:42 Pulling zookeeper (confluentinc/cp-zookeeper:latest)...
14:10:43 latest: Pulling from confluentinc/cp-zookeeper
14:10:57 Digest: sha256:9babd1c0beaf93189982bdbb9fe4bf194a2730298b640c057817746c19838866
14:10:57 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest
14:10:57 Pulling kafka (confluentinc/cp-kafka:latest)...
14:10:57 latest: Pulling from confluentinc/cp-kafka
14:11:09 Digest: sha256:24cdd3a7fa89d2bed150560ebea81ff1943badfa61e51d66bb541a6b0d7fb047
14:11:09 Status: Downloaded newer image for confluentinc/cp-kafka:latest
14:11:09 Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)...
14:11:09 3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator
14:11:13 Digest: sha256:60a680475999b7df727a4e4ae6dd0391d3a6f4fffbde0f8c3faea985c8443c48
14:11:13 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT
14:11:13 Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.1)...
14:11:15 3.1.1: Pulling from onap/policy-api
14:11:17 Digest: sha256:73823c235d74d2500efd44b527f0e010b15469552561a2052fab717e6646a352
14:11:17 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.1
14:11:17 Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.1)...
14:11:17 3.1.1: Pulling from onap/policy-pap
14:11:19 Digest: sha256:2271905a2e80443fc6baa2f2141445192fe325d5c557920b1f4880541288e18d
14:11:19 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.1
14:11:19 Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)...
14:11:19 3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp
14:11:27 Digest: sha256:3f9880e060c3465862043c69561fa1d43ab448175d1adf3efd53d751d3b9947d
14:11:27 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT
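A single docker-compose up -d of apex-pdp and grafana drags in the whole dependency chain (mariadb, zookeeper, kafka, api, pap, simulator, prometheus) defined in the compose file, which is why every image above gets pulled. A sketch of the same startup by hand, under the assumption that export-ports.sh and get-versions.sh only seed environment variables consumed by the compose file:

  cd compose
  source export-ports.sh   # host port mappings (30002, 30003, 30259, 30269, ...)
  source get-versions.sh   # image tags for the policy components
  docker-compose up -d apex-pdp grafana
  docker-compose ps        # confirm every service reached "Up"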
14:11:27 Creating prometheus ...
14:11:27 Creating compose_zookeeper_1 ...
14:11:27 Creating simulator ...
14:11:27 Creating mariadb ...
14:11:45 Creating simulator ... done
14:11:46 Creating mariadb ... done
14:11:46 Creating policy-db-migrator ...
14:11:47 Creating policy-db-migrator ... done
14:11:47 Creating policy-api ...
14:11:48 Creating policy-api ... done
14:11:49 Creating prometheus ... done
14:11:49 Creating grafana ...
14:11:50 Creating grafana ... done
14:11:51 Creating compose_zookeeper_1 ... done
14:11:51 Creating kafka ...
14:11:52 Creating kafka ... done
14:11:52 Creating policy-pap ...
14:11:53 Creating policy-pap ... done
14:11:53 Creating policy-apex-pdp ...
14:11:54 Creating policy-apex-pdp ... done
14:11:54 +++ echo 'Prometheus server: http://localhost:30259'
14:11:54 Prometheus server: http://localhost:30259
14:11:54 +++ echo 'Grafana server: http://localhost:30269'
14:11:54 Grafana server: http://localhost:30269
14:11:54 +++ cd /w/workspace/policy-pap-master-project-csit-pap
14:11:54 ++ sleep 10
14:12:04 ++ unset http_proxy https_proxy
14:12:04 ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003
14:12:04 Waiting for REST to come up on localhost port 30003...
14:12:04 NAMES STATUS
14:12:04 policy-apex-pdp Up 10 seconds
14:12:04 policy-pap Up 11 seconds
14:12:04 kafka Up 12 seconds
14:12:04 grafana Up 14 seconds
14:12:04 policy-api Up 16 seconds
14:12:04 compose_zookeeper_1 Up 13 seconds
14:12:04 mariadb Up 18 seconds
14:12:04 simulator Up 19 seconds
14:12:04 prometheus Up 15 seconds
14:12:10 NAMES STATUS
14:12:10 policy-apex-pdp Up 15 seconds
14:12:10 policy-pap Up 16 seconds
14:12:10 kafka Up 17 seconds
14:12:10 grafana Up 19 seconds
14:12:10 policy-api Up 21 seconds
14:12:10 compose_zookeeper_1 Up 18 seconds
14:12:10 mariadb Up 23 seconds
14:12:10 simulator Up 24 seconds
14:12:10 prometheus Up 20 seconds
14:12:15 NAMES STATUS
14:12:15 policy-apex-pdp Up 20 seconds
14:12:15 policy-pap Up 21 seconds
14:12:15 kafka Up 22 seconds
14:12:15 grafana Up 24 seconds
14:12:15 policy-api Up 26 seconds
14:12:15 compose_zookeeper_1 Up 23 seconds
14:12:15 mariadb Up 28 seconds
14:12:15 simulator Up 29 seconds
14:12:15 prometheus Up 25 seconds
14:12:20 NAMES STATUS
14:12:20 policy-apex-pdp Up 25 seconds
14:12:20 policy-pap Up 26 seconds
14:12:20 kafka Up 27 seconds
14:12:20 grafana Up 29 seconds
14:12:20 policy-api Up 31 seconds
14:12:20 compose_zookeeper_1 Up 28 seconds
14:12:20 mariadb Up 33 seconds
14:12:20 simulator Up 34 seconds
14:12:20 prometheus Up 30 seconds
14:12:25 NAMES STATUS
14:12:25 policy-apex-pdp Up 30 seconds
14:12:25 policy-pap Up 31 seconds
14:12:25 kafka Up 32 seconds
14:12:25 grafana Up 34 seconds
14:12:25 policy-api Up 36 seconds
14:12:25 compose_zookeeper_1 Up 33 seconds
14:12:25 mariadb Up 38 seconds
14:12:25 simulator Up 39 seconds
14:12:25 prometheus Up 35 seconds
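wait_for_rest.sh itself is not shown in this log, so the following is only a plausible sketch of such a polling loop (host and port from the invocation above; the docker ps snapshots match what the log prints between attempts):

  # Poll until something accepts TCP connections on host:port.
  host=localhost; port=30003
  echo "Waiting for REST to come up on $host port $port..."
  until nc -z "$host" "$port"; do
    docker ps --format 'table {{ .Names }}\t{{ .Status }}'
    sleep 5
  done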
14:12:25 ++ export 'SUITES=pap-test.robot
14:12:25 pap-slas.robot'
14:12:25 ++ SUITES='pap-test.robot
14:12:25 pap-slas.robot'
14:12:25 ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
14:12:25 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
14:12:25 + load_set
14:12:25 + _setopts=hxB
14:12:25 ++ echo braceexpand:hashall:interactive-comments:xtrace
14:12:25 ++ tr : ' '
14:12:25 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
14:12:25 + set +o braceexpand
14:12:25 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
14:12:25 + set +o hashall
14:12:25 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
14:12:25 + set +o interactive-comments
14:12:25 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
14:12:25 + set +o xtrace
14:12:25 ++ echo hxB
14:12:25 ++ sed 's/./& /g'
14:12:25 + for i in $(echo "$_setopts" | sed 's/./& /g')
14:12:25 + set +h
14:12:25 + for i in $(echo "$_setopts" | sed 's/./& /g')
14:12:25 + set +x
14:12:25 + docker_stats
14:12:25 + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt
14:12:25 ++ uname -s
14:12:25 + '[' Linux == Darwin ']'
14:12:25 + sh -c 'top -bn1 | head -3'
14:12:25 top - 14:12:25 up 5 min, 0 users, load average: 3.27, 1.57, 0.65
14:12:25 Tasks: 210 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
14:12:25 %Cpu(s): 11.6 us, 2.4 sy, 0.0 ni, 80.5 id, 5.4 wa, 0.0 hi, 0.1 si, 0.1 st
14:12:25 + echo
14:12:25 + sh -c 'free -h'
14:12:25
14:12:25 + echo
14:12:25 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
14:12:25 total used free shared buff/cache available
14:12:25 Mem: 31G 2.5G 22G 1.3M 6.2G 28G
14:12:25 Swap: 1.0G 0B 1.0G
14:12:25
14:12:25 NAMES STATUS
14:12:25 policy-apex-pdp Up 30 seconds
14:12:25 policy-pap Up 31 seconds
14:12:25 kafka Up 32 seconds
14:12:25 grafana Up 34 seconds
14:12:25 policy-api Up 36 seconds
14:12:25 compose_zookeeper_1 Up 33 seconds
14:12:25 mariadb Up 38 seconds
14:12:25 simulator Up 39 seconds
14:12:25 prometheus Up 35 seconds
14:12:25 + echo
14:12:25 + docker stats --no-stream
14:12:25
14:12:28 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
14:12:28 d2c3e0b17a61 policy-apex-pdp 193.67% 193.9MiB / 31.41GiB 0.60% 7.07kB / 6.85kB 0B / 0B 48
14:12:28 85e3881d71a2 policy-pap 9.57% 518.7MiB / 31.41GiB 1.61% 28kB / 29.8kB 0B / 153MB 61
14:12:28 ef04caad11d3 kafka 55.49% 390.8MiB / 31.41GiB 1.21% 69.7kB / 73.2kB 0B / 500kB 83
14:12:28 a62c613b33fc grafana 0.02% 53.69MiB / 31.41GiB 0.17% 18.4kB / 3.18kB 0B / 24.9MB 18
14:12:28 acf71fa6ff00 policy-api 0.09% 496.5MiB / 31.41GiB 1.54% 1e+03kB / 710kB 0B / 0B 55
14:12:28 e6905731a0ff compose_zookeeper_1 0.09% 99.88MiB / 31.41GiB 0.31% 56.3kB / 49.5kB 0B / 385kB 60
14:12:28 5e76f0512c7d mariadb 0.01% 102.1MiB / 31.41GiB 0.32% 995kB / 1.19MB 11MB / 63.7MB 41
14:12:28 6d002923de7f simulator 0.08% 122.1MiB / 31.41GiB 0.38% 1.81kB / 0B 168kB / 0B 76
14:12:28 0e5aa80e1ba3 prometheus 0.00% 18.21MiB / 31.41GiB 0.06% 1.28kB / 158B 0B / 0B 13
14:12:28 + echo
14:12:28
14:12:28 + cd /tmp/tmp.6QreRUgV9i
14:12:28 + echo 'Reading the testplan:'
14:12:28 Reading the testplan:
14:12:28 + echo 'pap-test.robot
14:12:28 pap-slas.robot'
14:12:28 + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)'
14:12:28 + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|'
14:12:28 + cat testplan.txt
14:12:28 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
14:12:28 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
14:12:28 ++ xargs
14:12:28 + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot'
14:12:28 + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
14:12:28 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
14:12:28 ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
14:12:28 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
14:12:28 + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...'
14:12:28 Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...
14:12:28 + relax_set
14:12:28 + set +e
14:12:28 + set +o pipefail
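The SUITES value is built by expanding the testplan into absolute suite paths; the pipeline, lifted straight from the trace above, is:

  # Strip comments and blank lines, prefix the tests directory, flatten to one line.
  SUITES=$(egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' testplan.txt \
    | sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|' \
    | xargs)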
14:12:28 + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
14:12:28 ==============================================================================
14:12:28 pap
14:12:28 ==============================================================================
14:12:28 pap.Pap-Test
14:12:28 ==============================================================================
14:12:29 LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
14:12:29 ------------------------------------------------------------------------------
14:12:29 LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
14:12:29 ------------------------------------------------------------------------------
14:12:30 LoadNodeTemplates :: Create node templates in database using speci... | PASS |
14:12:30 ------------------------------------------------------------------------------
14:12:30 Healthcheck :: Verify policy pap health check | PASS |
14:12:30 ------------------------------------------------------------------------------
14:12:51 Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
14:12:51 ------------------------------------------------------------------------------
14:12:51 Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
14:12:51 ------------------------------------------------------------------------------
14:12:51 AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
14:12:51 ------------------------------------------------------------------------------
14:12:51 QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
14:12:51 ------------------------------------------------------------------------------
14:12:52 ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
14:12:52 ------------------------------------------------------------------------------
14:12:52 QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
14:12:52 ------------------------------------------------------------------------------
14:12:52 DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
14:12:52 ------------------------------------------------------------------------------
14:12:52 QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
14:12:52 ------------------------------------------------------------------------------
14:12:52 QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
14:12:52 ------------------------------------------------------------------------------
14:12:53 QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
14:12:53 ------------------------------------------------------------------------------
14:12:53 UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
14:12:53 ------------------------------------------------------------------------------
14:12:53 UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
14:12:53 ------------------------------------------------------------------------------
14:12:53 QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
14:12:53 ------------------------------------------------------------------------------
14:13:13 QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
14:13:13 ------------------------------------------------------------------------------
14:13:14 QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
14:13:14 ------------------------------------------------------------------------------
14:13:14 DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
14:13:14 ------------------------------------------------------------------------------
14:13:14 DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
14:13:14 ------------------------------------------------------------------------------
14:13:14 QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
14:13:14 ------------------------------------------------------------------------------
14:13:14 pap.Pap-Test | PASS |
14:13:14 22 tests, 22 passed, 0 failed
14:13:14 ==============================================================================
14:13:14 pap.Pap-Slas
14:13:14 ==============================================================================
14:14:14 WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
14:14:14 ------------------------------------------------------------------------------
14:14:14 ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
14:14:14 ------------------------------------------------------------------------------
14:14:14 ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
14:14:14 ------------------------------------------------------------------------------
14:14:14 ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
14:14:14 ------------------------------------------------------------------------------
14:14:14 ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
14:14:14 ------------------------------------------------------------------------------
14:14:14 ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
14:14:14 ------------------------------------------------------------------------------
14:14:14 ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
14:14:14 ------------------------------------------------------------------------------
14:14:14 ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS |
14:14:14 ------------------------------------------------------------------------------
14:14:14 pap.Pap-Slas | PASS |
14:14:14 8 tests, 8 passed, 0 failed
14:14:14 ==============================================================================
14:14:14 pap | PASS |
14:14:14 30 tests, 30 passed, 0 failed
14:14:14 ==============================================================================
14:14:14 Output: /tmp/tmp.6QreRUgV9i/output.xml
14:14:14 Log: /tmp/tmp.6QreRUgV9i/log.html
14:14:14 Report: /tmp/tmp.6QreRUgV9i/report.html
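All 30 tests pass, so robot.run exits 0; Robot Framework's process exit code is the number of failed tests, and the harness captures it and propagates it as the build result. A sketch of that pattern, matching the RESULT handling traced below:

  python3 -m robot.run -N pap $ROBOT_VARIABLES $SUITES
  RESULT=$?             # failed-test count; 0 means a green run
  echo "RESULT: $RESULT"
  exit $RESULT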
14:14:14 + RESULT=0
14:14:14 + load_set
14:14:14 + _setopts=hxB
14:14:14 ++ echo braceexpand:hashall:interactive-comments:xtrace
14:14:14 ++ tr : ' '
14:14:14 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
14:14:14 + set +o braceexpand
14:14:14 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
14:14:14 + set +o hashall
14:14:14 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
14:14:14 + set +o interactive-comments
14:14:14 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
14:14:14 + set +o xtrace
14:14:14 ++ echo hxB
14:14:14 ++ sed 's/./& /g'
14:14:14 + for i in $(echo "$_setopts" | sed 's/./& /g')
14:14:14 + set +h
14:14:14 + for i in $(echo "$_setopts" | sed 's/./& /g')
14:14:14 + set +x
14:14:14 + echo 'RESULT: 0'
14:14:14 RESULT: 0
14:14:14 + exit 0
14:14:14 + on_exit
14:14:14 + rc=0
14:14:14 + [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
14:14:14 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
14:14:14 NAMES STATUS
14:14:14 policy-apex-pdp Up 2 minutes
14:14:14 policy-pap Up 2 minutes
14:14:14 kafka Up 2 minutes
14:14:14 grafana Up 2 minutes
14:14:14 policy-api Up 2 minutes
14:14:14 compose_zookeeper_1 Up 2 minutes
14:14:14 mariadb Up 2 minutes
14:14:14 simulator Up 2 minutes
14:14:14 prometheus Up 2 minutes
14:14:14 + docker_stats
14:14:14 ++ uname -s
14:14:14 + '[' Linux == Darwin ']'
14:14:14 + sh -c 'top -bn1 | head -3'
14:14:15 top - 14:14:15 up 6 min, 0 users, load average: 0.73, 1.22, 0.62
14:14:15 Tasks: 200 total, 1 running, 129 sleeping, 0 stopped, 0 zombie
14:14:15 %Cpu(s): 9.8 us, 1.9 sy, 0.0 ni, 83.9 id, 4.3 wa, 0.0 hi, 0.1 si, 0.1 st
14:14:15 + echo
14:14:15
14:14:15 + sh -c 'free -h'
14:14:15 total used free shared buff/cache available
14:14:15 Mem: 31G 2.8G 22G 1.3M 6.2G 28G
14:14:15 Swap: 1.0G 0B 1.0G
14:14:15 + echo
14:14:15
14:14:15 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
14:14:15 NAMES STATUS
14:14:15 policy-apex-pdp Up 2 minutes
14:14:15 policy-pap Up 2 minutes
14:14:15 kafka Up 2 minutes
14:14:15 grafana Up 2 minutes
14:14:15 policy-api Up 2 minutes
14:14:15 compose_zookeeper_1 Up 2 minutes
14:14:15 mariadb Up 2 minutes
14:14:15 simulator Up 2 minutes
14:14:15 prometheus Up 2 minutes
14:14:15 + echo
14:14:15
14:14:15 + docker stats --no-stream
14:14:17 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS
14:14:17 d2c3e0b17a61 policy-apex-pdp 0.48% 188.9MiB / 31.41GiB 0.59% 55.6kB / 89.6kB 0B / 0B 52
14:14:17 85e3881d71a2 policy-pap 1.03% 536.8MiB / 31.41GiB 1.67% 2.33MB / 806kB 0B / 153MB 65
14:14:17 ef04caad11d3 kafka 1.06% 382.4MiB / 31.41GiB 1.19% 237kB / 214kB 0B / 606kB 85
14:14:17 a62c613b33fc grafana 0.06% 61MiB / 31.41GiB 0.19% 19.5kB / 4.45kB 0B / 24.9MB 18
14:14:17 acf71fa6ff00 policy-api 0.10% 563.8MiB / 31.41GiB 1.75% 2.49MB / 1.26MB 0B / 0B 58
14:14:17 e6905731a0ff compose_zookeeper_1 0.09% 100.2MiB / 31.41GiB 0.31% 59.2kB / 51.1kB 0B / 385kB 60
14:14:17 5e76f0512c7d mariadb 0.02% 103.4MiB / 31.41GiB 0.32% 1.95MB / 4.77MB 11MB / 64.1MB 28
14:14:17 6d002923de7f simulator 0.08% 122.2MiB / 31.41GiB 0.38% 2.12kB / 0B 168kB / 0B 78
14:14:17 0e5aa80e1ba3 prometheus 0.00% 25.39MiB / 31.41GiB 0.08% 181kB / 10.9kB 0B / 0B 13
14:14:17 + echo
14:14:17
14:14:17 + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
14:14:17 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']'
14:14:17 + relax_set
14:14:17 + set +e
14:14:17 + set +o pipefail
14:14:17 + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh
14:14:17 ++ echo 'Shut down started!'
14:14:17 Shut down started!
14:14:17 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']'
14:14:17 ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose
14:14:17 ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose
14:14:17 ++ source export-ports.sh
14:14:17 ++ source get-versions.sh
14:14:20 ++ echo 'Collecting logs from docker compose containers...'
14:14:20 Collecting logs from docker compose containers...
14:14:20 ++ docker-compose logs
14:14:21 ++ cat docker_compose.log
14:14:21 Attaching to policy-apex-pdp, policy-pap, kafka, grafana, policy-api, policy-db-migrator, compose_zookeeper_1, mariadb, simulator, prometheus
14:14:21 grafana | logger=settings t=2024-04-09T14:11:50.841763984Z level=info msg="Starting Grafana" version=10.4.1 commit=d94d597d847c05085542c29dfad6b3f469cc77e1 branch=v10.4.x compiled=2024-04-09T14:11:50Z
14:14:21 grafana | logger=settings t=2024-04-09T14:11:50.842967186Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
14:14:21 grafana | logger=settings t=2024-04-09T14:11:50.843132929Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
14:14:21 grafana | logger=settings t=2024-04-09T14:11:50.84319117Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
14:14:21 grafana | logger=settings t=2024-04-09T14:11:50.843253311Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
14:14:21 grafana | logger=settings t=2024-04-09T14:11:50.843300622Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
14:14:21 grafana | logger=settings t=2024-04-09T14:11:50.843365933Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
14:14:21 grafana | logger=settings t=2024-04-09T14:11:50.843448295Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
14:14:21 grafana | logger=settings t=2024-04-09T14:11:50.843506056Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
14:14:21 grafana | logger=settings t=2024-04-09T14:11:50.843619668Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
14:14:21 grafana | logger=settings t=2024-04-09T14:11:50.843694779Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
14:14:21 grafana | logger=settings t=2024-04-09T14:11:50.843775381Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
14:14:21 grafana | logger=settings t=2024-04-09T14:11:50.843872793Z level=info msg=Target target=[all]
14:14:21 grafana | logger=settings t=2024-04-09T14:11:50.843931884Z level=info msg="Path Home" path=/usr/share/grafana
14:14:21 grafana | logger=settings t=2024-04-09T14:11:50.844016455Z level=info msg="Path Data" path=/var/lib/grafana
14:14:21 grafana | logger=settings t=2024-04-09T14:11:50.844062226Z level=info msg="Path Logs" path=/var/log/grafana
14:14:21 grafana | logger=settings t=2024-04-09T14:11:50.844146588Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
14:14:21 grafana | logger=settings t=2024-04-09T14:11:50.844178148Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
14:14:21 grafana | logger=settings t=2024-04-09T14:11:50.84428847Z level=info msg="App mode production"
14:14:21 grafana | logger=sqlstore t=2024-04-09T14:11:50.844781909Z level=info msg="Connecting to DB" dbtype=sqlite3
14:14:21 grafana | logger=sqlstore t=2024-04-09T14:11:50.844875971Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.845683716Z level=info msg="Starting DB migrations"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.846806516Z level=info msg="Executing migration" id="create migration_log table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.847742014Z level=info msg="Migration successfully executed" id="create migration_log table" duration=934.568µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.889773833Z level=info msg="Executing migration" id="create user table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.891019576Z level=info msg="Migration successfully executed" id="create user table" duration=1.251573ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.894056932Z level=info msg="Executing migration" id="add unique index user.login"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.89504398Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=986.988µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.909824141Z level=info msg="Executing migration" id="add unique index user.email"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.910796318Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=976.378µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.913118141Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.913727322Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=609.421µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.916126846Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.916760648Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=634.492µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.921473434Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.925808833Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=4.335459ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.929106464Z level=info msg="Executing migration" id="create user table v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.930063842Z level=info msg="Migration successfully executed" id="create user table v2" duration=956.938µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.932837092Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.933682598Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=845.506µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.938432015Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.9392856Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=853.385µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.942090222Z level=info msg="Executing migration" id="copy data_source v1 to v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.942652822Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=559.43µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.945570865Z level=info msg="Executing migration" id="Drop old table user_v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.946260448Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=688.523µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.949142231Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.950449655Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.306924ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.956521736Z level=info msg="Executing migration" id="Update user table charset"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.956617028Z level=info msg="Migration successfully executed" id="Update user table charset" duration=95.172µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.959288257Z level=info msg="Executing migration" id="Add last_seen_at column to user"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.960483049Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.194842ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.963256229Z level=info msg="Executing migration" id="Add missing user data"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.963620786Z level=info msg="Migration successfully executed" id="Add missing user data" duration=363.687µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.966474599Z level=info msg="Executing migration" id="Add is_disabled column to user"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:50.967743902Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.251802ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.143345125Z level=info msg="Executing migration" id="Add index user.login/user.email"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.145015536Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.670531ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.148584231Z level=info msg="Executing migration" id="Add is_service_account column to user"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.15069257Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=2.107768ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.153816777Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.161807103Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=7.989346ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.166805204Z level=info msg="Executing migration" id="Add uid column to user"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.168086848Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.281244ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.171096133Z level=info msg="Executing migration" id="Update uid column values for users"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.171574552Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=478.519µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.175772319Z level=info msg="Executing migration" id="Add unique index user_uid"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.177115203Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=1.342835ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.18129531Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.181757518Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=462.288µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.188091044Z level=info msg="Executing migration" id="create temp user table v1-7"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.18954694Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.454176ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.193797988Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.195201254Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.403176ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.301997187Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.303555286Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.561359ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.30922751Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.310722587Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.494087ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.314419225Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.315397923Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=976.048µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.318978178Z level=info msg="Executing migration" id="Update temp_user table charset"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.31907730Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=98.892µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.324849945Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.325732102Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=879.776µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.328415771Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.329360318Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=944.788µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.332362553Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.333202108Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=839.435µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.3377074Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.338593307Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=885.727µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.351058265Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.3562655Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=5.205815ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.361680139Z level=info msg="Executing migration" id="create temp_user v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.362729748Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.048709ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.379288991Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.380932061Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=1.64243ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.384810092Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.385675998Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=865.376µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.390051188Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.390958935Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=904.276µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.394043361Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.394918787Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=875.106µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.400185623Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.400672592Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=487.169µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.403081086Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.404059344Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=978.218µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.407573679Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
14:14:21 zookeeper_1 | ===> User
14:14:21 zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
14:14:21 zookeeper_1 | ===> Configuring ...
14:14:21 zookeeper_1 | ===> Running preflight checks ...
14:14:21 zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ...
14:14:21 zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ...
14:14:21 zookeeper_1 | ===> Launching ...
14:14:21 zookeeper_1 | ===> Launching zookeeper ...
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,370] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,377] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,377] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,377] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,377] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,379] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,379] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,379] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,379] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,380] INFO Log4j 1.2 jmx support not found; jmx disabled.
(org.apache.zookeeper.jmx.ManagedUtil) 14:14:21 zookeeper_1 | [2024-04-09 14:11:55,381] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 14:14:21 zookeeper_1 | [2024-04-09 14:11:55,381] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 14:14:21 zookeeper_1 | [2024-04-09 14:11:55,381] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 14:14:21 zookeeper_1 | [2024-04-09 14:11:55,381] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 14:14:21 zookeeper_1 | [2024-04-09 14:11:55,382] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 14:14:21 zookeeper_1 | [2024-04-09 14:11:55,382] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 14:14:21 zookeeper_1 | [2024-04-09 14:11:55,393] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@26275bef (org.apache.zookeeper.server.ServerMetrics) 14:14:21 zookeeper_1 | [2024-04-09 14:11:55,396] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 14:14:21 zookeeper_1 | [2024-04-09 14:11:55,396] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 14:14:21 zookeeper_1 | [2024-04-09 14:11:55,398] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 14:14:21 zookeeper_1 | [2024-04-09 14:11:55,407] INFO (org.apache.zookeeper.server.ZooKeeperServer) 14:14:21 zookeeper_1 | [2024-04-09 14:11:55,407] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 14:14:21 zookeeper_1 | [2024-04-09 14:11:55,408] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 14:14:21 zookeeper_1 | [2024-04-09 14:11:55,408] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) 14:14:21 zookeeper_1 | [2024-04-09 14:11:55,408] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 14:14:21 zookeeper_1 | [2024-04-09 14:11:55,408] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 14:14:21 zookeeper_1 | [2024-04-09 14:11:55,408] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 14:14:21 zookeeper_1 | [2024-04-09 14:11:55,408] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 14:14:21 zookeeper_1 | [2024-04-09 14:11:55,408] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 14:14:21 zookeeper_1 | [2024-04-09 14:11:55,408] INFO (org.apache.zookeeper.server.ZooKeeperServer) 14:14:21 zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer) 14:14:21 zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:host.name=e6905731a0ff (org.apache.zookeeper.server.ZooKeeperServer) 14:14:21 zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer) 14:14:21 zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.server.ZooKeeperServer) 14:14:21 zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) 14:14:21 zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/k
afka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:os.memory.free=490MB (org.apache.zookeeper.server.ZooKeeperServer)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,410] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,410] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,410] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,410] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,410] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,410] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,410] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,410] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,411] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,412] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,412] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,413] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,413] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,414] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,414] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,414] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,414] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
14:14:21 kafka | ===> User
14:14:21 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
14:14:21 kafka | ===> Configuring ...
14:14:21 kafka | Running in Zookeeper mode...
14:14:21 kafka | ===> Running preflight checks ...
14:14:21 kafka | ===> Check if /var/lib/kafka/data is writable ...
14:14:21 kafka | ===> Check if Zookeeper is healthy ...
14:14:21 kafka | SLF4J: Class path contains multiple SLF4J bindings.
14:14:21 kafka | SLF4J: Found binding in [jar:file:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
14:14:21 kafka | SLF4J: Found binding in [jar:file:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class]
14:14:21 kafka | SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
14:14:21 kafka | SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory]
14:14:21 kafka | [2024-04-09 14:11:56,856] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper)
14:14:21 kafka | [2024-04-09 14:11:56,857] INFO Client environment:host.name=ef04caad11d3 (org.apache.zookeeper.ZooKeeper)
14:14:21 kafka | [2024-04-09 14:11:56,857] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper)
14:14:21 kafka | [2024-04-09 14:11:56,857] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper)
14:14:21 kafka | [2024-04-09 14:11:56,857] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.408389903Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=815.895µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.414265421Z level=info msg="Executing migration" id="create star table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.415049745Z level=info msg="Migration successfully executed" id="create star table" duration=783.814µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.418300895Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.419206891Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=904.416µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.422744766Z level=info msg="Executing migration" id="create org table v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.424233173Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.488057ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.427407011Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.428300678Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=896.757µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.43388942Z level=info msg="Executing migration" id="create org_user table v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.434763986Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=874.346µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.437419024Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.438321921Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=902.917µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.442705451Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.443615948Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=910.597µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.4464715Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.447366946Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=895.736µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.452088653Z level=info msg="Executing migration" id="Update org table charset"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.452276456Z level=info msg="Migration successfully executed" id="Update org table charset" duration=188.283µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.45524345Z level=info msg="Executing migration" id="Update org_user table charset"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.455465104Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=223.214µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.45902226Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.459522619Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=498.409µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.463801107Z level=info msg="Executing migration" id="create dashboard table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.464644842Z level=info msg="Migration successfully executed" id="create dashboard table" duration=842.845µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.469318518Z level=info msg="Executing migration" id="add index dashboard.account_id"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.470276735Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=955.187µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.473198679Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.474245468Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.045959ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.477001798Z level=info msg="Executing migration" id="create dashboard_tag table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.477756622Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=754.324µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.482697353Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.483583579Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=886.187µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.486333719Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.487232186Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=897.017µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.4907332Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.498043553Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=7.310403ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.503659006Z level=info msg="Executing migration" id="create dashboard v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.504592593Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=933.887µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.50769081Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.508528245Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=834.545µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.511872636Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.512833174Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=960.068µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.516557332Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.51699044Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=431.988µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.521052254Z level=info msg="Executing migration" id="drop table dashboard_v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.521941631Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=888.817µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.525950034Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.526183368Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=233.604µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.531681629Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.534703584Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=3.020985ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.538076295Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.53998463Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.909405ms
14:14:21 kafka | [2024-04-09 14:11:56,857] INFO Client environment:java.class.path=/usr/share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/share/java/kafka/jersey-common-2.39.1.jar:/usr/share/java/kafka/swagger-annotations-2.2.8.jar:/usr/share/java/kafka/jose4j-0.9.3.jar:/usr/share/java/kafka/commons-validator-1.7.jar:/usr/share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/share/java/kafka/rocksdbjni-7.9.2.jar:/usr/share/java/kafka/jackson-annotations-2.13.5.jar:/usr/share/java/kafka/commons-io-2.11.0.jar:/usr/share/java/kafka/javax.activation-api-1.2.0.jar:/usr/share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/share/java/kafka/commons-cli-1.4.jar:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/share/java/kafka/scala-reflect-2.13.11.jar:/usr/share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/share/java/kafka/jline-3.22.0.jar:/usr/share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/share/java/kafka/hk2-api-2.6.1.jar:/usr/share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/share/java/kafka/kafka.jar:/usr/share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/share/java/kafka/scala-library-2.13.11.jar:/usr/share/java/kafka/jakarta.inject-2.6.1.jar:/usr/share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/share/java/kafka/hk2-locator-2.6.1.jar:/usr/share/java/kafka/reflections-0.10.2.jar:/usr/share/java/kafka/slf4j-api-1.7.36.jar:/usr/share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/share/java/kafka/paranamer-2.8.jar:/usr/share/java/kafka/commons-beanutils-1.9.4.jar:/usr/share/java/kafka/jaxb-api-2.3.1.jar:/usr/share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/share/java/kafka/hk2-utils-2.6.1.jar:/usr/share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/share/java/kafka/reload4j-1.2.25.jar:/usr/share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/share/java/kafka/jackson-core-2.13.5.jar:/usr/share/java/kafka/jersey-hk2-2.39.1.jar:/usr/share/java/kafka/jackson-databind-2.13.5.jar:/usr/share/java/kafka/jersey-client-2.39.1.jar:/usr/share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/share/java/kafka/commons-digester-2.1.jar:/usr/share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/share/java/kafka/argparse4j-0.7.0.jar:/usr/share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/kafka/audience-annotations-0.12.0.jar:/usr/share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/kafka/maven-artifact-3.8.8.jar:/usr/share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/share/java/kafka/jersey-server-2.39.1.jar:/usr/share/java/kafka/commons-lang3-3.8.1.jar:/usr/share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/share/java/kafka/jopt-simple-5.0.4.jar:/usr/share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/share/java/kafka/lz4-java-1.8.0.jar:/usr/share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/share/java/kafka/checker-qual-3.19.0.jar:/usr/share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/share/java/kafka/pcollections-4.0.1.jar:/usr/share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/share/java/kafka/commons-logging-1.2.jar:/usr/share/java/kafka/jsr305-3.0.2.jar:/usr/share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/kafka/metrics-core-2.2.0.jar:/usr/share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/share/java/kafka/commons-collections-3.2.2.jar:/usr/share/java/kafka/javassist-3.29.2-GA.jar:/usr/share/java/kafka/caffeine-2.9.3.jar:/usr/share/java/kafka/plexus-utils-3.3.1.jar:/usr/share/java/kafka/zookeeper-3.8.3.jar:/usr/share/java/kafka/activation-1.1.1.jar:/usr/share/java/kafka/netty-common-4.1.100.Final.jar:/usr/share/java/kafka/metrics-core-4.1.12.1.jar:/usr/share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/share/java/kafka/snappy-java-1.1.10.5.jar:/usr/share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/common-utils-7.6.0.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/utility-belt-7.6.0.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper)
14:14:21 kafka | [2024-04-09 14:11:56,857] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
14:14:21 kafka | [2024-04-09 14:11:56,857] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
14:14:21 kafka | [2024-04-09 14:11:56,857] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper)
14:14:21 kafka | [2024-04-09 14:11:56,857] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
14:14:21 kafka | [2024-04-09 14:11:56,857] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
14:14:21 kafka | [2024-04-09 14:11:56,857] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper)
14:14:21 kafka | [2024-04-09 14:11:56,857] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper)
14:14:21 kafka | [2024-04-09 14:11:56,857] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper)
14:14:21 kafka | [2024-04-09 14:11:56,857] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper)
14:14:21 kafka | [2024-04-09 14:11:56,857] INFO Client environment:os.memory.free=487MB (org.apache.zookeeper.ZooKeeper)
14:14:21 kafka | [2024-04-09 14:11:56,857] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper)
14:14:21 kafka | [2024-04-09 14:11:56,857] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper)
14:14:21 kafka | [2024-04-09 14:11:56,860] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@184cf7cf (org.apache.zookeeper.ZooKeeper)
14:14:21 kafka | [2024-04-09 14:11:56,863] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
14:14:21 kafka | [2024-04-09 14:11:56,867] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket)
14:14:21 kafka | [2024-04-09 14:11:56,874] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn)
14:14:21 kafka | [2024-04-09 14:11:56,887] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn)
14:14:21 kafka | [2024-04-09 14:11:56,887] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
14:14:21 kafka | [2024-04-09 14:11:56,894] INFO Socket connection established, initiating session, client: /172.17.0.9:37666, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn)
14:14:21 kafka | [2024-04-09 14:11:56,930] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x100000445790000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn)
14:14:21 kafka | [2024-04-09 14:11:57,050] INFO Session: 0x100000445790000 closed (org.apache.zookeeper.ZooKeeper)
14:14:21 kafka | [2024-04-09 14:11:57,050] INFO EventThread shut down for session: 0x100000445790000 (org.apache.zookeeper.ClientCnxn)
14:14:21 kafka | Using log4j config /etc/kafka/log4j.properties
14:14:21 kafka | ===> Launching ...
14:14:21 kafka | ===> Launching kafka ...
14:14:21 kafka | [2024-04-09 14:11:57,724] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 14:14:21 kafka | [2024-04-09 14:11:58,051] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 14:14:21 kafka | [2024-04-09 14:11:58,131] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) 14:14:21 kafka | [2024-04-09 14:11:58,133] INFO starting (kafka.server.KafkaServer) 14:14:21 kafka | [2024-04-09 14:11:58,134] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 14:14:21 kafka | [2024-04-09 14:11:58,148] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) 14:14:21 kafka | [2024-04-09 14:11:58,152] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 14:14:21 kafka | [2024-04-09 14:11:58,152] INFO Client environment:host.name=ef04caad11d3 (org.apache.zookeeper.ZooKeeper) 14:14:21 kafka | [2024-04-09 14:11:58,152] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) 14:14:21 kafka | [2024-04-09 14:11:58,152] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 14:14:21 kafka | [2024-04-09 14:11:58,152] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 14:14:21 kafka | [2024-04-09 14:11:58,152] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar
:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/u
sr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 14:14:21 kafka | [2024-04-09 14:11:58,152] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 14:14:21 kafka | [2024-04-09 14:11:58,152] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 14:14:21 kafka | [2024-04-09 14:11:58,153] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 14:14:21 kafka | [2024-04-09 14:11:58,153] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 14:14:21 kafka | [2024-04-09 14:11:58,153] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 14:14:21 kafka | [2024-04-09 14:11:58,153] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 14:14:21 kafka | [2024-04-09 14:11:58,153] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 14:14:21 kafka | [2024-04-09 14:11:58,153] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 14:14:21 kafka | [2024-04-09 14:11:58,153] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 14:14:21 kafka | [2024-04-09 14:11:58,153] INFO Client environment:os.memory.free=1007MB (org.apache.zookeeper.ZooKeeper) 14:14:21 kafka | [2024-04-09 14:11:58,153] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 14:14:21 kafka | [2024-04-09 14:11:58,153] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 14:14:21 kafka | [2024-04-09 14:11:58,157] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@1f6c9cd8 (org.apache.zookeeper.ZooKeeper) 14:14:21 kafka | [2024-04-09 14:11:58,164] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 14:14:21 kafka | [2024-04-09 14:11:58,171] INFO zookeeper.request.timeout value is 0. 
feature enabled=false (org.apache.zookeeper.ClientCnxn) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.543430894Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.547397526Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=3.966553ms 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.551365189Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.552192554Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=825.475µs 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.557017442Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.559038849Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=2.020557ms 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.562266328Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.563246916Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=977.488µs 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.569961549Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.571233862Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.274093ms 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.574522062Z level=info msg="Executing migration" id="Update dashboard table charset" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.574748466Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=226.214µs 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.579805299Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.580013633Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=207.074µs 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.585087876Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.587180984Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.093199ms 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.592548582Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.594684931Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.135949ms 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.60174251Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.603856749Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.113859ms 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.609483822Z level=info msg="Executing migration" id="Add column uid in dashboard" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.612973436Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" 
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.620201528Z level=info msg="Executing migration" id="Update uid column values in dashboard"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.620528294Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=327.586µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.628051701Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.628895387Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=843.586µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.637367672Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.638743697Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.379455ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.643779709Z level=info msg="Executing migration" id="Update dashboard title length"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.64383012Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=52.961µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.651399368Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.652489238Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.08812ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.659786002Z level=info msg="Executing migration" id="create dashboard_provisioning"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.660986414Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.200682ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.667475203Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.674093993Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=6.61957ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.723431046Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.724466385Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=1.038979ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.746747472Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.747925634Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.179662ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.763532469Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.765587317Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=2.052888ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.773203877Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.773503042Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=298.576µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.778820139Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.779666875Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=844.646µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.78705182Z level=info msg="Executing migration" id="Add check_sum column"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.789196019Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.143589ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.803970909Z level=info msg="Executing migration" id="Add index for dashboard_title"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.806020917Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=2.050638ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.813693047Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.81388488Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=192.183µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.818982724Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.819508823Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=525.889µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.82258368Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.824036496Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.453766ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.829346703Z level=info msg="Executing migration" id="Add isPublic for dashboard"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.833144303Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=3.79348ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.840846034Z level=info msg="Executing migration" id="create data_source table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.841775191Z level=info msg="Migration successfully executed" id="create data_source table" duration=933.247µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.847059097Z level=info msg="Executing migration" id="add index data_source.account_id"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.848983392Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.924595ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.856229645Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.859085577Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=2.856242ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.867597163Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.868954498Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.357685ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.879582122Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.880512139Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=929.927µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.886655642Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.89473645Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=8.081537ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.902844998Z level=info msg="Executing migration" id="create data_source table v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.903777935Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=932.987µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.913495133Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.91498939Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=1.541718ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.920924578Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.921786214Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=861.496µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.929166429Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.930360441Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=1.205402ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.936549064Z level=info msg="Executing migration" id="Add column with_credentials"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.939973947Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=3.424043ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.943919109Z level=info msg="Executing migration" id="Add secure json data column"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.94619181Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.272241ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.952694099Z level=info msg="Executing migration" id="Update data_source table charset"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.95272371Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=26.211µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.95983881Z level=info msg="Executing migration" id="Update initial version to 1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.960162836Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=324.326µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.969516047Z level=info msg="Executing migration" id="Add read_only data column"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.973164864Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=3.648287ms
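The grafana entries above come from Grafana's startup migrator, which applies its schema migrations in order and logs an "Executing migration" / "Migration successfully executed" pair with a duration for each id. A minimal sketch (an assumption, not part of this job) for confirming from the host that the migrator finished and the backing database is healthy, via Grafana's /api/health endpoint, assuming the container publishes port 3000 on localhost:

    #!/usr/bin/env bash
    # Hedged sketch: poll Grafana's health endpoint until the schema migrator
    # has finished and the database reports "ok". Port mapping is assumed.
    for i in $(seq 1 30); do
      body=$(curl -sf http://localhost:3000/api/health) \
        && echo "$body" | grep -q '"database": *"ok"' \
        && { echo "grafana up, migrations applied"; exit 0; }
      sleep 2
    done
    echo "grafana did not become healthy in time" >&2
    exit 1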
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.976969294Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.977508903Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=539.609µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.981955475Z level=info msg="Executing migration" id="Update json_data with nulls"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.982183479Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=228.304µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.988625357Z level=info msg="Executing migration" id="Add uid column"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:51.992288954Z level=info msg="Migration successfully executed" id="Add uid column" duration=3.663537ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.000943622Z level=info msg="Executing migration" id="Update uid value"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.001186757Z level=info msg="Migration successfully executed" id="Update uid value" duration=242.474µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.004693831Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.005532056Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=838.125µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.009514239Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.010390471Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=876.013µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.016564091Z level=info msg="Executing migration" id="create api_key table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.017962111Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.39741ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.025237347Z level=info msg="Executing migration" id="add index api_key.account_id"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.026432364Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.198057ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.031243714Z level=info msg="Executing migration" id="add index api_key.key"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.032391861Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.149147µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.037749378Z level=info msg="Executing migration" id="add index api_key.account_id_name"
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,414] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,414] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,416] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,416] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,416] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,416] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,416] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,445] INFO Logging initialized @544ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,519] WARN o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,519] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,539] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,570] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,570] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,572] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,575] WARN ServletContext@o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,583] INFO Started o.e.j.s.ServletContextHandler@5be1d0a4{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,603] INFO Started ServerConnector@4f32a3ad{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,603] INFO Started @702ms (org.eclipse.jetty.server.Server)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,603] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,609] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,609] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,611] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,612] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,625] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,625] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,626] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,626] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,630] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,630] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,633] INFO Snapshot loaded in 6 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,634] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,634] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,642] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,643] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,656] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager)
14:14:21 zookeeper_1 | [2024-04-09 14:11:55,657] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider)
14:14:21 zookeeper_1 | [2024-04-09 14:11:56,911] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog)
14:14:21 kafka | [2024-04-09 14:11:58,173] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
14:14:21 kafka | [2024-04-09 14:11:58,177] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn)
14:14:21 kafka | [2024-04-09 14:11:58,183] INFO Socket connection established, initiating session, client: /172.17.0.9:37668, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn)
14:14:21 kafka | [2024-04-09 14:11:58,191] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x100000445790001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
14:14:21 kafka | [2024-04-09 14:11:58,200] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
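At this point the broker's ZooKeeper session is established against zookeeper:2181 (standalone, ZooKeeper 3.8.3) with an 18000 ms negotiated timeout. A minimal sketch of checking the same endpoint by hand with ZooKeeper's four-letter-word commands; this assumes srvr is on the 4lw whitelist, which it is by default in ZooKeeper 3.5+:

    # Hedged sketch: probe the ensemble the broker just connected to.
    # "srvr" prints version, latency stats and server mode.
    echo srvr | nc zookeeper 2181
    # Expect something like "Zookeeper version: 3.8.3-..." and "Mode: standalone".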
14:14:21 kafka | [2024-04-09 14:11:58,508] INFO Cluster ID = TupwFhGQQjGmvCIddVeH4w (kafka.server.KafkaServer)
14:14:21 kafka | [2024-04-09 14:11:58,511] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint)
14:14:21 kafka | [2024-04-09 14:11:58,555] INFO KafkaConfig values:
14:14:21 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
14:14:21 kafka | alter.config.policy.class.name = null
14:14:21 kafka | alter.log.dirs.replication.quota.window.num = 11
14:14:21 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1
14:14:21 kafka | authorizer.class.name =
14:14:21 kafka | auto.create.topics.enable = true
14:14:21 kafka | auto.include.jmx.reporter = true
14:14:21 kafka | auto.leader.rebalance.enable = true
14:14:21 kafka | background.threads = 10
14:14:21 kafka | broker.heartbeat.interval.ms = 2000
14:14:21 kafka | broker.id = 1
14:14:21 kafka | broker.id.generation.enable = true
14:14:21 kafka | broker.rack = null
14:14:21 kafka | broker.session.timeout.ms = 9000
14:14:21 kafka | client.quota.callback.class = null
14:14:21 kafka | compression.type = producer
14:14:21 kafka | connection.failed.authentication.delay.ms = 100
14:14:21 kafka | connections.max.idle.ms = 600000
14:14:21 kafka | connections.max.reauth.ms = 0
14:14:21 kafka | control.plane.listener.name = null
14:14:21 kafka | controlled.shutdown.enable = true
14:14:21 kafka | controlled.shutdown.max.retries = 3
14:14:21 kafka | controlled.shutdown.retry.backoff.ms = 5000
14:14:21 kafka | controller.listener.names = null
14:14:21 kafka | controller.quorum.append.linger.ms = 25
14:14:21 kafka | controller.quorum.election.backoff.max.ms = 1000
14:14:21 kafka | controller.quorum.election.timeout.ms = 1000
14:14:21 kafka | controller.quorum.fetch.timeout.ms = 2000
14:14:21 kafka | controller.quorum.request.timeout.ms = 2000
14:14:21 kafka | controller.quorum.retry.backoff.ms = 20
14:14:21 kafka | controller.quorum.voters = []
14:14:21 kafka | controller.quota.window.num = 11
14:14:21 kafka | controller.quota.window.size.seconds = 1
14:14:21 kafka | controller.socket.timeout.ms = 30000
14:14:21 kafka | create.topic.policy.class.name = null
14:14:21 kafka | default.replication.factor = 1
14:14:21 kafka | delegation.token.expiry.check.interval.ms = 3600000
14:14:21 kafka | delegation.token.expiry.time.ms = 86400000
14:14:21 kafka | delegation.token.master.key = null
14:14:21 kafka | delegation.token.max.lifetime.ms = 604800000
14:14:21 kafka | delegation.token.secret.key = null
14:14:21 kafka | delete.records.purgatory.purge.interval.requests = 1
14:14:21 kafka | delete.topic.enable = true
14:14:21 kafka | early.start.listeners = null
14:14:21 kafka | fetch.max.bytes = 57671680
14:14:21 kafka | fetch.purgatory.purge.interval.requests = 1000
14:14:21 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor]
14:14:21 kafka | group.consumer.heartbeat.interval.ms = 5000
14:14:21 kafka | group.consumer.max.heartbeat.interval.ms = 15000
14:14:21 kafka | group.consumer.max.session.timeout.ms = 60000
14:14:21 kafka | group.consumer.max.size = 2147483647
14:14:21 kafka | group.consumer.min.heartbeat.interval.ms = 5000
14:14:21 kafka | group.consumer.min.session.timeout.ms = 45000
14:14:21 kafka | group.consumer.session.timeout.ms = 45000
14:14:21 kafka | group.coordinator.new.enable = false
14:14:21 kafka | group.coordinator.threads = 1
14:14:21 kafka | group.initial.rebalance.delay.ms = 3000
14:14:21 kafka | group.max.session.timeout.ms = 1800000
14:14:21 kafka | group.max.size = 2147483647
14:14:21 kafka | group.min.session.timeout.ms = 6000
14:14:21 kafka | initial.broker.registration.timeout.ms = 60000
14:14:21 kafka | inter.broker.listener.name = PLAINTEXT
14:14:21 kafka | inter.broker.protocol.version = 3.6-IV2
14:14:21 kafka | kafka.metrics.polling.interval.secs = 10
14:14:21 kafka | kafka.metrics.reporters = []
14:14:21 kafka | leader.imbalance.check.interval.seconds = 300
14:14:21 kafka | leader.imbalance.per.broker.percentage = 10
14:14:21 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
14:14:21 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
14:14:21 kafka | log.cleaner.backoff.ms = 15000
14:14:21 kafka | log.cleaner.dedupe.buffer.size = 134217728
14:14:21 kafka | log.cleaner.delete.retention.ms = 86400000
14:14:21 kafka | log.cleaner.enable = true
14:14:21 kafka | log.cleaner.io.buffer.load.factor = 0.9
14:14:21 kafka | log.cleaner.io.buffer.size = 524288
14:14:21 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
14:14:21 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807
14:14:21 kafka | log.cleaner.min.cleanable.ratio = 0.5
14:14:21 kafka | log.cleaner.min.compaction.lag.ms = 0
14:14:21 kafka | log.cleaner.threads = 1
14:14:21 kafka | log.cleanup.policy = [delete]
14:14:21 kafka | log.dir = /tmp/kafka-logs
14:14:21 kafka | log.dirs = /var/lib/kafka/data
14:14:21 kafka | log.flush.interval.messages = 9223372036854775807
14:14:21 kafka | log.flush.interval.ms = null
14:14:21 kafka | log.flush.offset.checkpoint.interval.ms = 60000
14:14:21 kafka | log.flush.scheduler.interval.ms = 9223372036854775807
14:14:21 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000
14:14:21 kafka | log.index.interval.bytes = 4096
14:14:21 kafka | log.index.size.max.bytes = 10485760
14:14:21 kafka | log.local.retention.bytes = -2
14:14:21 kafka | log.local.retention.ms = -2
14:14:21 kafka | log.message.downconversion.enable = true
14:14:21 kafka | log.message.format.version = 3.0-IV1
14:14:21 kafka | log.message.timestamp.after.max.ms = 9223372036854775807
14:14:21 kafka | log.message.timestamp.before.max.ms = 9223372036854775807
14:14:21 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807
14:14:21 kafka | log.message.timestamp.type = CreateTime
14:14:21 kafka | log.preallocate = false
14:14:21 kafka | log.retention.bytes = -1
14:14:21 kafka | log.retention.check.interval.ms = 300000
14:14:21 kafka | log.retention.hours = 168
14:14:21 kafka | log.retention.minutes = null
14:14:21 kafka | log.retention.ms = null
14:14:21 kafka | log.roll.hours = 168
14:14:21 kafka | log.roll.jitter.hours = 0
14:14:21 kafka | log.roll.jitter.ms = null
14:14:21 kafka | log.roll.ms = null
14:14:21 kafka | log.segment.bytes = 1073741824
14:14:21 kafka | log.segment.delete.delay.ms = 60000
14:14:21 kafka | max.connection.creation.rate = 2147483647
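The KafkaConfig dump shows two listeners: PLAINTEXT on 0.0.0.0:9092 advertised as kafka:9092 for clients on the compose network, and PLAINTEXT_HOST on 0.0.0.0:29092 advertised as localhost:29092 for the host. A minimal sketch for exercising both, assuming the broker container is named kafka and ships the standard Confluent kafka-topics CLI (both assumptions, not shown in this trace):

    # Hedged sketch: ask each advertised listener for topic metadata.
    docker exec kafka kafka-topics --bootstrap-server kafka:9092 --list   # in-network listener
    kafka-topics --bootstrap-server localhost:29092 --list                # host listener, if the CLI is installed locally

A broker answering on both confirms the advertised.listeners / listener.security.protocol.map pairing above is consistent.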
14:14:21 mariadb | 2024-04-09 14:11:46+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
14:14:21 mariadb | 2024-04-09 14:11:46+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql'
14:14:21 mariadb | 2024-04-09 14:11:46+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
14:14:21 mariadb | 2024-04-09 14:11:46+00:00 [Note] [Entrypoint]: Initializing database files
14:14:21 mariadb | 2024-04-09 14:11:47 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
14:14:21 mariadb | 2024-04-09 14:11:47 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
14:14:21 mariadb | 2024-04-09 14:11:47 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
14:14:21 mariadb |
14:14:21 mariadb |
14:14:21 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER !
14:14:21 mariadb | To do so, start the server, then issue the following command:
14:14:21 mariadb |
14:14:21 mariadb | '/usr/bin/mysql_secure_installation'
14:14:21 mariadb |
14:14:21 mariadb | which will also give you the option of removing the test
14:14:21 mariadb | databases and anonymous user created by default. This is
14:14:21 mariadb | strongly recommended for production servers.
14:14:21 mariadb |
14:14:21 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb
14:14:21 mariadb |
14:14:21 mariadb | Please report any problems at https://mariadb.org/jira
14:14:21 mariadb |
14:14:21 mariadb | The latest information about MariaDB is available at https://mariadb.org/.
14:14:21 mariadb |
14:14:21 mariadb | Consider joining MariaDB's strong and vibrant community:
14:14:21 mariadb | https://mariadb.org/get-involved/
14:14:21 mariadb |
14:14:21 mariadb | 2024-04-09 14:11:48+00:00 [Note] [Entrypoint]: Database files initialized
14:14:21 mariadb | 2024-04-09 14:11:48+00:00 [Note] [Entrypoint]: Starting temporary server
14:14:21 mariadb | 2024-04-09 14:11:48+00:00 [Note] [Entrypoint]: Waiting for server startup
14:14:21 mariadb | 2024-04-09 14:11:48 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 97 ...
14:14:21 mariadb | 2024-04-09 14:11:48 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.039161518Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.41197ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.042408455Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.043067665Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=661.27µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.051625149Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.053156941Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.535532ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.058030241Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.058792952Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=745.921µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.063409019Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.074408179Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=11.000079ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.079814457Z level=info msg="Executing migration" id="create api_key table v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.080783471Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=968.634µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.088020975Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.089108621Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.087646ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.127990514Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.130251947Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=2.263373ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.140079359Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.141173875Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.088936ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.147659649Z level=info msg="Executing migration" id="copy api_key v1 to v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.148981878Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=1.32232ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.154323725Z level=info msg="Executing migration" id="Drop old table api_key_v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.155133537Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=810.462µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.165255063Z level=info msg="Executing migration" id="Update api_key table charset"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.165285874Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=33.641µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.175209827Z level=info msg="Executing migration" id="Add expires to api_key table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.180688917Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=5.48129ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.191380211Z level=info msg="Executing migration" id="Add service account foreign key"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.19471239Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=3.331188ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.203139962Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.203663579Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=521.477µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.20717585Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.213449141Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=6.272161ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.21960851Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.222052745Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.443775ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.22785689Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.228903675Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=1.046755ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.232628218Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.233476951Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=849.913µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.241431786Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.244158125Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=2.725939ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.248466188Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.249739876Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.274688ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.25620881Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.25756819Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=1.358919ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.265132769Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.265956921Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=823.992µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.276151789Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
14:14:21 mariadb | 2024-04-09 14:11:48 0 [Note] InnoDB: Number of transaction pools: 1
14:14:21 mariadb | 2024-04-09 14:11:48 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
14:14:21 mariadb | 2024-04-09 14:11:48 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
14:14:21 mariadb | 2024-04-09 14:11:48 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
14:14:21 mariadb | 2024-04-09 14:11:48 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
14:14:21 mariadb | 2024-04-09 14:11:48 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
14:14:21 mariadb | 2024-04-09 14:11:48 0 [Note] InnoDB: Completed initialization of buffer pool
14:14:21 mariadb | 2024-04-09 14:11:48 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
14:14:21 mariadb | 2024-04-09 14:11:48 0 [Note] InnoDB: 128 rollback segments are active.
14:14:21 mariadb | 2024-04-09 14:11:48 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
14:14:21 mariadb | 2024-04-09 14:11:48 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
14:14:21 mariadb | 2024-04-09 14:11:48 0 [Note] InnoDB: log sequence number 46590; transaction id 14
14:14:21 mariadb | 2024-04-09 14:11:48 0 [Note] Plugin 'FEEDBACK' is disabled.
14:14:21 mariadb | 2024-04-09 14:11:48 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
14:14:21 mariadb | 2024-04-09 14:11:48 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode.
14:14:21 mariadb | 2024-04-09 14:11:48 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode.
14:14:21 mariadb | 2024-04-09 14:11:48 0 [Note] mariadbd: ready for connections.
14:14:21 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution
14:14:21 mariadb | 2024-04-09 14:11:49+00:00 [Note] [Entrypoint]: Temporary server started.
14:14:21 mariadb | 2024-04-09 14:11:51+00:00 [Note] [Entrypoint]: Creating user policy_user
14:14:21 mariadb | 2024-04-09 14:11:51+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation)
14:14:21 mariadb |
14:14:21 mariadb | 2024-04-09 14:11:51+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf
14:14:21 mariadb |
14:14:21 mariadb | 2024-04-09 14:11:51+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh
14:14:21 mariadb | #!/bin/bash -xv
14:14:21 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved
14:14:21 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation.
14:14:21 mariadb | #
14:14:21 mariadb | # Licensed under the Apache License, Version 2.0 (the "License");
14:14:21 mariadb | # you may not use this file except in compliance with the License.
14:14:21 mariadb | # You may obtain a copy of the License at
14:14:21 mariadb | #
14:14:21 mariadb | # http://www.apache.org/licenses/LICENSE-2.0
14:14:21 mariadb | #
14:14:21 mariadb | # Unless required by applicable law or agreed to in writing, software
14:14:21 mariadb | # distributed under the License is distributed on an "AS IS" BASIS,
14:14:21 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14:14:21 mariadb | # See the License for the specific language governing permissions and
14:14:21 mariadb | # limitations under the License.
14:14:21 mariadb |
14:14:21 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp
14:14:21 mariadb | do
14:14:21 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};"
14:14:21 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.276485963Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=192.123µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.283911061Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.283982862Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=74.412µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.289118196Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
14:14:21 kafka | max.connections = 2147483647
14:14:21 kafka | max.connections.per.ip = 2147483647
14:14:21 kafka | max.connections.per.ip.overrides =
14:14:21 kafka | max.incremental.fetch.session.cache.slots = 1000
14:14:21 kafka | message.max.bytes = 1048588
14:14:21 kafka | metadata.log.dir = null
14:14:21 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520
14:14:21 kafka | metadata.log.max.snapshot.interval.ms = 3600000
14:14:21 kafka | metadata.log.segment.bytes = 1073741824
14:14:21 kafka | metadata.log.segment.min.bytes = 8388608
14:14:21 kafka | metadata.log.segment.ms = 604800000
14:14:21 kafka | metadata.max.idle.interval.ms = 500
14:14:21 kafka | metadata.max.retention.bytes = 104857600
14:14:21 kafka | metadata.max.retention.ms = 604800000
14:14:21 kafka | metric.reporters = []
14:14:21 kafka | metrics.num.samples = 2
14:14:21 kafka | metrics.recording.level = INFO
14:14:21 kafka | metrics.sample.window.ms = 30000
14:14:21 kafka | min.insync.replicas = 1
14:14:21 kafka | node.id = 1
14:14:21 kafka | num.io.threads = 8
14:14:21 kafka | num.network.threads = 3
14:14:21 mariadb | done
14:14:21 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
14:14:21 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;'
14:14:21 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;'
14:14:21 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
14:14:21 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;'
14:14:21 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;'
14:14:21 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
14:14:21 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;'
14:14:21 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;'
14:14:21 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
14:14:21 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;'
14:14:21 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;'
14:14:21 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
14:14:21 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;'
14:14:21 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;'
14:14:21 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp
14:14:21 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;'
14:14:21 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;'
14:14:21 mariadb |
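db.sh, echoed above by bash -xv, loops over six schemas and issues CREATE DATABASE IF NOT EXISTS plus a GRANT to ${MYSQL_USER} for each. A minimal sketch that confirms the loop's effect, assuming the container is named mariadb and reusing the policy_user credentials visible in the trace:

    # Hedged sketch: list the schemas db.sh created and the grants policy_user holds.
    docker exec mariadb mysql -upolicy_user -ppolicy_user -e 'SHOW DATABASES;'
    docker exec mariadb mysql -upolicy_user -ppolicy_user -e "SHOW GRANTS FOR 'policy_user'@'%';"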
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.296138618Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=7.021912ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.301317042Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.304882594Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=3.565242ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.310449415Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.310511876Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=63.451µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.316223578Z level=info msg="Executing migration" id="create quota table v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.317459736Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.233738ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.324448877Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.32535443Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=907.283µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.333010331Z level=info msg="Executing migration" id="Update quota table charset"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.333038812Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=29.241µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.337642788Z level=info msg="Executing migration" id="create plugin_setting table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.339342833Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.699145ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.345872247Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.346812931Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=939.734µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.352733157Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.357384824Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=4.649417ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.361413082Z level=info msg="Executing migration" id="Update plugin_setting table charset"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.361440533Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=27.701µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.364568328Z level=info msg="Executing migration" id="create session table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.365875347Z level=info msg="Migration successfully executed" id="create session table" duration=1.306889ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.371331946Z level=info msg="Executing migration" id="Drop old table playlist table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.371468878Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=138.212µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.374287739Z level=info msg="Executing migration" id="Drop old table playlist_item table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.374417541Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=129.252µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.37714161Z level=info msg="Executing migration" id="create playlist table v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.378342157Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.199607ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.381609335Z level=info msg="Executing migration" id="create playlist item table v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.383065356Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.455161ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.389105613Z level=info msg="Executing migration" id="Update playlist table charset"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.389194544Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=90.131µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.392147797Z level=info msg="Executing migration" id="Update playlist_item table charset"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.392234358Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=87.901µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.394713394Z level=info msg="Executing migration" id="Add playlist column created_at"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.399580135Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.866141ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.403892177Z level=info msg="Executing migration" id="Add playlist column updated_at"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.406949221Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.056344ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.446555145Z level=info msg="Executing migration" id="drop preferences table v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.446848179Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=292.935µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.45035113Z level=info msg="Executing migration" id="drop preferences table v3"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.450531962Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=179.442µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.454326587Z level=info msg="Executing migration" id="create preferences table v3"
14:14:21 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"
14:14:21 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;'
14:14:21 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql
14:14:21 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp
14:14:21 mariadb |
14:14:21 mariadb | 2024-04-09 14:11:52+00:00 [Note] [Entrypoint]: Stopping temporary server
14:14:21 mariadb | 2024-04-09 14:11:52 0 [Note] mariadbd (initiated by: unknown): Normal shutdown
14:14:21 mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: FTS optimize thread exiting.
14:14:21 mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: Starting shutdown...
14:14:21 mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool
14:14:21 mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: Buffer pool(s) dump completed at 240409 14:11:52
14:14:21 mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1"
14:14:21 mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: Shutdown completed; log sequence number 328914; transaction id 298
14:14:21 mariadb | 2024-04-09 14:11:52 0 [Note] mariadbd: Shutdown complete
14:14:21 mariadb |
14:14:21 mariadb | 2024-04-09 14:11:52+00:00 [Note] [Entrypoint]: Temporary server stopped
14:14:21 mariadb |
14:14:21 mariadb | 2024-04-09 14:11:52+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up.
14:14:21 mariadb |
14:14:21 mariadb | 2024-04-09 14:11:52 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ...
14:14:21 mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
14:14:21 mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: Number of transaction pools: 1
14:14:21 mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
14:14:21 mariadb | 2024-04-09 14:11:52 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts)
14:14:21 mariadb | 2024-04-09 14:11:52 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required)
14:14:21 mariadb | 2024-04-09 14:11:52 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF
14:14:21 mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB
14:14:21 mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: Completed initialization of buffer pool
14:14:21 mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes)
14:14:21 mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: 128 rollback segments are active.
14:14:21 mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ...
14:14:21 mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB.
14:14:21 mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: log sequence number 328914; transaction id 299
14:14:21 mariadb | 2024-04-09 14:11:52 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
14:14:21 mariadb | 2024-04-09 14:11:52 0 [Note] Plugin 'FEEDBACK' is disabled.
14:14:21 mariadb | 2024-04-09 14:11:52 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions.
14:14:21 mariadb | 2024-04-09 14:11:52 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work.
14:14:21 mariadb | 2024-04-09 14:11:52 0 [Note] Server socket created on IP: '0.0.0.0'.
14:14:21 mariadb | 2024-04-09 14:11:52 0 [Note] Server socket created on IP: '::'.
14:14:21 mariadb | 2024-04-09 14:11:52 0 [Note] mariadbd: ready for connections.
14:14:21 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution
14:14:21 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution 14:14:21 mariadb | 2024-04-09 14:11:53 0 [Note] InnoDB: Buffer pool(s) load completed at 240409 14:11:52 14:14:21 mariadb | 2024-04-09 14:11:53 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) 14:14:21 mariadb | 2024-04-09 14:11:53 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication) 14:14:21 mariadb | 2024-04-09 14:11:53 6 [Warning] Aborted connection 6 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) 14:14:21 mariadb | 2024-04-09 14:11:54 64 [Warning] Aborted connection 64 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.455746238Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.419681ms 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.46075394Z level=info msg="Executing migration" id="Update preferences table charset" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.460790751Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=38.031µs 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.46353474Z level=info msg="Executing migration" id="Add column team_id in preferences" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.468429761Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=4.894841ms 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.539005432Z level=info msg="Executing migration" id="Update team_id column values in preferences" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.539235766Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=233.544µs 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.542115508Z level=info msg="Executing migration" id="Add column week_start in preferences" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.546991868Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=4.87454ms 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.551396392Z level=info msg="Executing migration" id="Add column preferences.json_data" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.554771721Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.375929ms 14:14:21 policy-db-migrator | Waiting for mariadb port 3306... 
14:14:21 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
14:14:21 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
14:14:21 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
14:14:21 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
14:14:21 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
14:14:21 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused
14:14:21 policy-db-migrator | Connection to mariadb (172.17.0.3) 3306 port [tcp/mysql] succeeded!
14:14:21 policy-db-migrator | 321 blocks
14:14:21 policy-db-migrator | Preparing upgrade release version: 0800
14:14:21 policy-db-migrator | Preparing upgrade release version: 0900
14:14:21 policy-db-migrator | Preparing upgrade release version: 1000
14:14:21 policy-db-migrator | Preparing upgrade release version: 1100
14:14:21 policy-db-migrator | Preparing upgrade release version: 1200
14:14:21 policy-db-migrator | Preparing upgrade release version: 1300
14:14:21 policy-db-migrator | Done
14:14:21 policy-db-migrator | name version
14:14:21 policy-db-migrator | policyadmin 0
14:14:21 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300
14:14:21 policy-db-migrator | upgrade: 0 -> 1300
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.557755224Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.557842315Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=87.611µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.563405106Z level=info msg="Executing migration" id="Add preferences index org_id"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.564699854Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.293878ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.568872185Z level=info msg="Executing migration" id="Add preferences index user_id"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.569826479Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=953.864µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.577644062Z level=info msg="Executing migration" id="create alert table v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.578978131Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.333039ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.587506504Z level=info msg="Executing migration" id="add index alert org_id & id "
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.588981556Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.478992ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.59821724Z level=info msg="Executing migration" id="add index alert state"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.59961399Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.39707ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.607121328Z level=info msg="Executing migration" id="add index alert dashboard_id"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.608144053Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.023415ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.615023383Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.616061808Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.036845ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.622973148Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.624443369Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.472301ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.6321299Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.633133355Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.003505ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.641164041Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.652023018Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=10.858647ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.699599047Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.700671312Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=1.075695ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.705014555Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.705905878Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=888.663µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.710668247Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.710975381Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=307.384µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.714689035Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.715566088Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=878.243µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.719026308Z level=info msg="Executing migration" id="create alert_notification table v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.71988881Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=862.212µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.724858222Z level=info msg="Executing migration" id="Add column is_default"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.728575976Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.716854ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.731949385Z level=info msg="Executing migration" id="Add column frequency"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.735653848Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.702463ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.739938411Z level=info msg="Executing migration" id="Add column send_reminder"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.742931554Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=2.993143ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.746739369Z level=info msg="Executing migration" id="Add column disable_resolve_message"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.753244463Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=6.504094ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.756204346Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.756849146Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=644.79µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.759349022Z level=info msg="Executing migration" id="Update alert table charset"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.759381202Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=32.64µs
14:14:21 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.762551838Z level=info msg="Executing migration" id="Update alert_notification table charset"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.762570688Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=19.59µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.769963625Z level=info msg="Executing migration" id="create notification_journal table v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.770806547Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=844.852µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.775029619Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.776060534Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.031035ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.780455737Z level=info msg="Executing migration" id="drop alert_notification_journal"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.781228458Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=773.181µs
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.785078104Z level=info msg="Executing migration" id="create alert_notification_state table v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.785957407Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=879.573µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.789820563Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.790916708Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.096146ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.794625162Z level=info msg="Executing migration" id="Add for to alert table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.798370146Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.744484ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.805192755Z level=info msg="Executing migration" id="Add column uid in alert_notification"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.81104551Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=5.852195ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.847913212Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.848372519Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=466.417µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.852111253Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.855392411Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=3.281158ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.862004946Z level=info msg="Executing migration" id="Remove unique index org_id_name"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.863510128Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.504182ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.869122949Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.873056496Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.933057ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.877319298Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.877388849Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=70.171µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.882450442Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.883093042Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=642.52µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.885975323Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.887664748Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.689425ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.892096372Z level=info msg="Executing migration" id="Drop old annotation table v4"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.892270474Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=174.172µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.898474374Z level=info msg="Executing migration" id="create annotation table v5"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.899905035Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.430211ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.902952929Z level=info msg="Executing migration" id="add index annotation 0 v3"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.90443645Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.483261ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.90786064Z level=info msg="Executing migration" id="add index annotation 1 v3"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.90927646Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.41642ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.914425755Z level=info msg="Executing migration" id="add index annotation 2 v3"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.915304428Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=878.683µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.918586255Z level=info msg="Executing migration" id="add index annotation 3 v3"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.91959442Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.007575ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.954962742Z level=info msg="Executing migration" id="add index annotation 4 v3"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.956603445Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.640313ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.962233147Z level=info msg="Executing migration" id="Update annotation table charset"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.962259357Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=27.06µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.965422453Z level=info msg="Executing migration" id="Add column region_id to annotation table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.971535472Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=6.111238ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.975231285Z level=info msg="Executing migration" id="Drop category_id index"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.975795813Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=565.818µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.981675578Z level=info msg="Executing migration" id="Add column tags to annotation table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.985465333Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=3.786755ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.990286863Z level=info msg="Executing migration" id="Create annotation_tag table v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.991357098Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=1.069395ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.995043812Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:52.995961815Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=918.103µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.00183331Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.002682382Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=849.062µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.006288384Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
14:14:21 kafka | num.partitions = 1
14:14:21 kafka | num.recovery.threads.per.data.dir = 1
14:14:21 kafka | num.replica.alter.log.dirs.threads = null
14:14:21 kafka | num.replica.fetchers = 1
14:14:21 kafka | offset.metadata.max.bytes = 4096
14:14:21 kafka | offsets.commit.required.acks = -1
14:14:21 kafka | offsets.commit.timeout.ms = 5000
14:14:21 kafka | offsets.load.buffer.size = 5242880
14:14:21 kafka | offsets.retention.check.interval.ms = 600000
14:14:21 kafka | offsets.retention.minutes = 10080
14:14:21 kafka | offsets.topic.compression.codec = 0
14:14:21 kafka | offsets.topic.num.partitions = 50
14:14:21 kafka | offsets.topic.replication.factor = 1
14:14:21 kafka | offsets.topic.segment.bytes = 104857600
14:14:21 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
14:14:21 kafka | password.encoder.iterations = 4096
14:14:21 kafka | password.encoder.key.length = 128
14:14:21 kafka | password.encoder.keyfactory.algorithm = null
14:14:21 kafka | password.encoder.old.secret = null
14:14:21 kafka | password.encoder.secret = null
14:14:21 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
14:14:21 kafka | process.roles = []
14:14:21 kafka | producer.id.expiration.check.interval.ms = 600000
14:14:21 kafka | producer.id.expiration.ms = 86400000
14:14:21 kafka | producer.purgatory.purge.interval.requests = 1000
14:14:21 kafka | queued.max.request.bytes = -1
14:14:21 kafka | queued.max.requests = 500
14:14:21 kafka | quota.window.num = 11
14:14:21 kafka | quota.window.size.seconds = 1
14:14:21 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824
14:14:21 kafka | remote.log.manager.task.interval.ms = 30000
14:14:21 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000
14:14:21 kafka | remote.log.manager.task.retry.backoff.ms = 500
14:14:21 kafka | remote.log.manager.task.retry.jitter = 0.2
14:14:21 kafka | remote.log.manager.thread.pool.size = 10
14:14:21 kafka | remote.log.metadata.custom.metadata.max.bytes = 128
14:14:21 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
14:14:21 kafka | remote.log.metadata.manager.class.path = null
14:14:21 kafka | remote.log.metadata.manager.impl.prefix = rlmm.config.
14:14:21 kafka | remote.log.metadata.manager.listener.name = null
14:14:21 kafka | remote.log.reader.max.pending.tasks = 100
14:14:21 kafka | remote.log.reader.threads = 10
14:14:21 kafka | remote.log.storage.manager.class.name = null
14:14:21 kafka | remote.log.storage.manager.class.path = null
14:14:21 kafka | remote.log.storage.manager.impl.prefix = rsm.config.
14:14:21 kafka | remote.log.storage.system.enable = false
14:14:21 kafka | replica.fetch.backoff.ms = 1000
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 kafka | replica.fetch.max.bytes = 1048576
14:14:21 kafka | replica.fetch.min.bytes = 1
14:14:21 kafka | replica.fetch.response.max.bytes = 10485760
14:14:21 kafka | replica.fetch.wait.max.ms = 500
14:14:21 kafka | replica.high.watermark.checkpoint.interval.ms = 5000
14:14:21 kafka | replica.lag.time.max.ms = 30000
14:14:21 kafka | replica.selector.class = null
14:14:21 kafka | replica.socket.receive.buffer.bytes = 65536
14:14:21 kafka | replica.socket.timeout.ms = 30000
14:14:21 kafka | replication.quota.window.num = 11
14:14:21 kafka | replication.quota.window.size.seconds = 1
14:14:21 kafka | request.timeout.ms = 30000
14:14:21 kafka | reserved.broker.max.id = 1000
14:14:21 kafka | sasl.client.callback.handler.class = null
14:14:21 kafka | sasl.enabled.mechanisms = [GSSAPI]
14:14:21 kafka | sasl.jaas.config = null
14:14:21 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit
14:14:21 kafka | sasl.kerberos.min.time.before.relogin = 60000
14:14:21 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT]
14:14:21 kafka | sasl.kerberos.service.name = null
14:14:21 kafka | sasl.kerberos.ticket.renew.jitter = 0.05
14:14:21 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8
14:14:21 kafka | sasl.login.callback.handler.class = null
14:14:21 kafka | sasl.login.class = null
14:14:21 kafka | sasl.login.connect.timeout.ms = null
14:14:21 kafka | sasl.login.read.timeout.ms = null
14:14:21 kafka | sasl.login.refresh.buffer.seconds = 300
14:14:21 kafka | sasl.login.refresh.min.period.seconds = 60
14:14:21 kafka | sasl.login.refresh.window.factor = 0.8
14:14:21 kafka | sasl.login.refresh.window.jitter = 0.05
14:14:21 kafka | sasl.login.retry.backoff.max.ms = 10000
14:14:21 kafka | sasl.login.retry.backoff.ms = 100
14:14:21 kafka | sasl.mechanism.controller.protocol = GSSAPI
14:14:21 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI
14:14:21 kafka | sasl.oauthbearer.clock.skew.seconds = 30
14:14:21 kafka | sasl.oauthbearer.expected.audience = null
14:14:21 kafka | sasl.oauthbearer.expected.issuer = null
14:14:21 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
14:14:21 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
14:14:21 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
14:14:21 kafka | sasl.oauthbearer.jwks.endpoint.url = null
14:14:21 kafka | sasl.oauthbearer.scope.claim.name = scope
14:14:21 kafka | sasl.oauthbearer.sub.claim.name = sub
14:14:21 kafka | sasl.oauthbearer.token.endpoint.url = null
14:14:21 kafka | sasl.server.callback.handler.class = null
14:14:21 kafka | sasl.server.max.receive.size = 524288
14:14:21 kafka | security.inter.broker.protocol = PLAINTEXT
14:14:21 kafka | security.providers = null
14:14:21 kafka | server.max.startup.time.ms = 9223372036854775807
14:14:21 kafka | socket.connection.setup.timeout.max.ms = 30000
14:14:21 kafka | socket.connection.setup.timeout.ms = 10000
14:14:21 kafka | socket.listen.backlog.size = 50
14:14:21 kafka | socket.receive.buffer.bytes = 102400
14:14:21 kafka | socket.request.max.bytes = 104857600
14:14:21 kafka | socket.send.buffer.bytes = 102400
14:14:21 kafka | ssl.cipher.suites = []
14:14:21 kafka | ssl.client.auth = none
14:14:21 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
14:14:21 kafka | ssl.endpoint.identification.algorithm = https
14:14:21 kafka | ssl.engine.factory.class = null
14:14:21 kafka | ssl.key.password = null
14:14:21 kafka | ssl.keymanager.algorithm = SunX509
14:14:21 kafka | ssl.keystore.certificate.chain = null
14:14:21 kafka | ssl.keystore.key = null
14:14:21 kafka | ssl.keystore.location = null
14:14:21 kafka | ssl.keystore.password = null
14:14:21 kafka | ssl.keystore.type = JKS
14:14:21 kafka | ssl.principal.mapping.rules = DEFAULT
14:14:21 kafka | ssl.protocol = TLSv1.3
14:14:21 kafka | ssl.provider = null
14:14:21 kafka | ssl.secure.random.implementation = null
14:14:21 kafka | ssl.trustmanager.algorithm = PKIX
14:14:21 kafka | ssl.truststore.certificates = null
14:14:21 kafka | ssl.truststore.location = null
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName))
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0450-pdpgroup.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version))
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName))
14:14:21 policy-apex-pdp | Waiting for mariadb port 3306...
14:14:21 policy-apex-pdp | mariadb (172.17.0.3:3306) open
14:14:21 policy-apex-pdp | Waiting for kafka port 9092...
14:14:21 policy-apex-pdp | kafka (172.17.0.9:9092) open
14:14:21 policy-apex-pdp | Waiting for pap port 6969...
14:14:21 policy-apex-pdp | pap (172.17.0.10:6969) open
14:14:21 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json'
14:14:21 policy-apex-pdp | [2024-04-09T14:12:25.816+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json]
14:14:21 policy-apex-pdp | [2024-04-09T14:12:25.969+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
14:14:21 policy-apex-pdp | 	allow.auto.create.topics = true
14:14:21 policy-apex-pdp | 	auto.commit.interval.ms = 5000
14:14:21 policy-apex-pdp | 	auto.include.jmx.reporter = true
14:14:21 policy-apex-pdp | 	auto.offset.reset = latest
14:14:21 policy-apex-pdp | 	bootstrap.servers = [kafka:9092]
14:14:21 policy-apex-pdp | 	check.crcs = true
14:14:21 policy-apex-pdp | 	client.dns.lookup = use_all_dns_ips
14:14:21 policy-apex-pdp | 	client.id = consumer-5bf355d1-b191-4690-8ff2-dd6842394381-1
14:14:21 policy-apex-pdp | 	client.rack = 
14:14:21 policy-apex-pdp | 	connections.max.idle.ms = 540000
14:14:21 policy-apex-pdp | 	default.api.timeout.ms = 60000
14:14:21 policy-apex-pdp | 	enable.auto.commit = true
14:14:21 policy-apex-pdp | 	exclude.internal.topics = true
14:14:21 policy-apex-pdp | 	fetch.max.bytes = 52428800
14:14:21 policy-apex-pdp | 	fetch.max.wait.ms = 500
14:14:21 policy-apex-pdp | 	fetch.min.bytes = 1
14:14:21 policy-apex-pdp | 	group.id = 5bf355d1-b191-4690-8ff2-dd6842394381
14:14:21 policy-apex-pdp | 	group.instance.id = null
14:14:21 policy-apex-pdp | 	heartbeat.interval.ms = 3000
14:14:21 policy-apex-pdp | 	interceptor.classes = []
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.017637965Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=11.346971ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.020517308Z level=info msg="Executing migration" id="Create annotation_tag table v3"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.021021328Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=503.95µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.024864178Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.02553384Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=669.472µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.027961915Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.028408973Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=446.758µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.031263856Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.032081051Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=817.345µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.037014261Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.037213945Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=197.444µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.040690249Z level=info msg="Executing migration" id="Add created time to annotation table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.044904216Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.213877ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.047463153Z level=info msg="Executing migration" id="Add updated time to annotation table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.051524478Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=4.058874ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.055330137Z level=info msg="Executing migration" id="Add index for created in annotation table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.056296475Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=965.878µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.059170638Z level=info msg="Executing migration" id="Add index for updated in annotation table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.060121935Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=951.297µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.063044869Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.063303254Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=258.385µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.067043742Z level=info msg="Executing migration" id="Add epoch_end column"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.071183848Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.139846ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.074473899Z level=info msg="Executing migration" id="Add index for epoch_end"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.0756069Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.136311ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.078769848Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.078974141Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=215.454µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.08273375Z level=info msg="Executing migration" id="Move region to single row"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.083135548Z level=info msg="Migration successfully executed" id="Move region to single row" duration=401.518µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.085927819Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.086808455Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=880.636µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.089611537Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.090468442Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=857.005µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.095678458Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.096611545Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=930.887µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.099687172Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.100576638Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=888.906µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.104284636Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.105100911Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=816.655µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.107913743Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.108788419Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=874.676µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.112279543Z level=info msg="Executing migration" id="Increase tags column to length 4096"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.112358844Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=78.951µs
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0470-pdp.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName))
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0480-pdpstatistics.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version))
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql
14:14:21 policy-db-migrator | --------------
14:14:21 kafka | ssl.truststore.password = null
14:14:21 kafka | ssl.truststore.type = JKS
14:14:21 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
14:14:21 kafka | transaction.max.timeout.ms = 900000
14:14:21 kafka | transaction.partition.verification.enable = true
14:14:21 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
14:14:21 kafka | transaction.state.log.load.buffer.size = 5242880
14:14:21 kafka | transaction.state.log.min.isr = 2
14:14:21 kafka | transaction.state.log.num.partitions = 50
14:14:21 kafka | transaction.state.log.replication.factor = 3
14:14:21 kafka | transaction.state.log.segment.bytes = 104857600
14:14:21 kafka | transactional.id.expiration.ms = 604800000
14:14:21 kafka | unclean.leader.election.enable = false
14:14:21 kafka | unstable.api.versions.enable = false
14:14:21 kafka | zookeeper.clientCnxnSocket = null
14:14:21 kafka | zookeeper.connect = zookeeper:2181
14:14:21 kafka | zookeeper.connection.timeout.ms = null
14:14:21 kafka | zookeeper.max.in.flight.requests = 10
14:14:21 kafka | zookeeper.metadata.migration.enable = false
14:14:21 kafka | zookeeper.session.timeout.ms = 18000
14:14:21 kafka | zookeeper.set.acl = false
14:14:21 kafka | zookeeper.ssl.cipher.suites = null
14:14:21 kafka | zookeeper.ssl.client.enable = false
14:14:21 kafka | zookeeper.ssl.crl.enable = false
14:14:21 kafka | zookeeper.ssl.enabled.protocols = null
14:14:21 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
14:14:21 kafka | zookeeper.ssl.keystore.location = null
14:14:21 kafka | zookeeper.ssl.keystore.password = null
14:14:21 kafka | zookeeper.ssl.keystore.type = null
14:14:21 kafka | zookeeper.ssl.ocsp.enable = false
14:14:21 kafka | zookeeper.ssl.protocol = TLSv1.2
14:14:21 kafka | zookeeper.ssl.truststore.location = null
14:14:21 kafka | zookeeper.ssl.truststore.password = null
14:14:21 kafka | zookeeper.ssl.truststore.type = null
14:14:21 kafka | (kafka.server.KafkaConfig)
14:14:21 kafka | [2024-04-09 14:11:58,582] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
14:14:21 kafka | [2024-04-09 14:11:58,583] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
14:14:21 kafka | [2024-04-09 14:11:58,584] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
14:14:21 kafka | [2024-04-09 14:11:58,586] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
14:14:21 kafka | [2024-04-09 14:11:58,612] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager)
14:14:21 kafka | [2024-04-09 14:11:58,616] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager)
14:14:21 kafka | [2024-04-09 14:11:58,625] INFO Loaded 0 logs in 13ms (kafka.log.LogManager)
14:14:21 kafka | [2024-04-09 14:11:58,627] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
14:11:58,627] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName))
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName))
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version))
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version))
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version))
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version))
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator | > upgrade 0570-toscadatatype.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version))
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator | > upgrade 0580-toscadatatypes.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version))
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version))
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator |
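The migrator statements above and below repeat one TOSCA schema pattern: each concept table keys rows on a composite (name, version) primary key, each plural container table carries just that key, and each container gets a *_<concept> join table. A minimal sketch of confirming the tables landed, assuming the MariaDB JDBC driver is on the classpath and using a hypothetical database name and credentials (the real values come from the compose environment, not this log):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class MigrationCheck {
        public static void main(String[] args) throws Exception {
            // Hypothetical URL and credentials; the CSIT stack injects the real ones via env vars.
            String url = "jdbc:mariadb://mariadb:3306/policyadmin";
            try (Connection conn = DriverManager.getConnection(url, "policy_user", "policy_password");
                 PreparedStatement ps = conn.prepareStatement(
                         "SELECT table_name FROM information_schema.tables "
                         + "WHERE table_schema = ? AND table_name LIKE 'tosca%' ORDER BY table_name")) {
                ps.setString(1, "policyadmin");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // e.g. toscacapabilitytype, toscadatatypes, toscanodetemplate, ...
                        System.out.println(rs.getString(1));
                    }
                }
            }
        }
    }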
14:14:21 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version))
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator | > upgrade 0630-toscanodetype.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version))
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator | > upgrade 0640-toscanodetypes.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version))
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion))
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator |
14:14:21 policy-db-migrator | > upgrade 0660-toscaparameter.sql
14:14:21 policy-db-migrator | --------------
14:14:21 kafka | [2024-04-09 14:11:58,628] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
14:14:21 kafka | [2024-04-09 14:11:58,637] INFO Starting the log cleaner (kafka.log.LogCleaner)
14:14:21 kafka | [2024-04-09 14:11:58,681] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread)
14:14:21 kafka | [2024-04-09 14:11:58,708] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread)
14:14:21 kafka | [2024-04-09 14:11:58,720] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener)
14:14:21 kafka | [2024-04-09 14:11:58,743] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
14:14:21 kafka | [2024-04-09 14:11:59,047] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
14:14:21 kafka | [2024-04-09 14:11:59,064] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer)
14:14:21 kafka | [2024-04-09 14:11:59,064] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
14:14:21 kafka | [2024-04-09 14:11:59,070] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer)
14:14:21 kafka | [2024-04-09 14:11:59,074] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread)
14:14:21 kafka | [2024-04-09 14:11:59,094] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:14:21 kafka | [2024-04-09 14:11:59,096] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:14:21 kafka | [2024-04-09 14:11:59,099] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:14:21 kafka | [2024-04-09 14:11:59,099] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:14:21 kafka | [2024-04-09 14:11:59,100] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:14:21 kafka | [2024-04-09 14:11:59,112] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
14:14:21 kafka | [2024-04-09 14:11:59,112] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager)
14:14:21 kafka | [2024-04-09 14:11:59,133] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient)
14:14:21 kafka | [2024-04-09 14:11:59,157] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1712671919148,1712671919148,1,0,0,72057612383354881,258,0,27
14:14:21 kafka | (kafka.zk.KafkaZkClient)
14:14:21 kafka | [2024-04-09 14:11:59,158] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient)
14:14:21 kafka | [2024-04-09 14:11:59,284] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread)
14:14:21 kafka | [2024-04-09 14:11:59,290] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:14:21 kafka | [2024-04-09 14:11:59,297] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:14:21 kafka | [2024-04-09 14:11:59,298] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:14:21 kafka | [2024-04-09 14:11:59,301] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
14:14:21 kafka | [2024-04-09 14:11:59,312] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController)
14:14:21 kafka | [2024-04-09 14:11:59,315] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
14:14:21 kafka | [2024-04-09 14:11:59,318] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController)
14:14:21 kafka | [2024-04-09 14:11:59,320] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
14:14:21 kafka | [2024-04-09 14:11:59,322] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
14:14:21 kafka | [2024-04-09 14:11:59,352] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
14:14:21 kafka | [2024-04-09 14:11:59,355] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache)
14:14:21 kafka | [2024-04-09 14:11:59,355] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
14:14:21 kafka | [2024-04-09 14:11:59,355] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
14:14:21 kafka | [2024-04-09 14:11:59,356] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController)
14:14:21 kafka | [2024-04-09 14:11:59,364] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController)
14:14:21 kafka | [2024-04-09 14:11:59,369] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController)
14:14:21 kafka | [2024-04-09 14:11:59,373] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController)
14:14:21 kafka | [2024-04-09 14:11:59,389] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController)
14:14:21 kafka | [2024-04-09 14:11:59,389] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
14:14:21 policy-api | Waiting for mariadb port 3306...
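The broker has now registered znode /brokers/ids/1 and advertised two listeners: PLAINTEXT://kafka:9092 inside the compose network and PLAINTEXT_HOST://localhost:29092 for the host. A sketch of verifying the registration with the Kafka AdminClient, assuming the check runs somewhere that can resolve one of those two addresses:

    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.common.Node;

    public class BrokerCheck {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // "kafka:9092" is the in-network listener from the log; use localhost:29092 from the host.
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                for (Node node : admin.describeCluster().nodes().get()) {
                    // Expect a single node with id=1, matching "Registered broker 1" above.
                    System.out.printf("broker id=%d host=%s port=%d%n", node.id(), node.host(), node.port());
                }
            }
        }
    }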
14:14:21 policy-api | mariadb (172.17.0.3:3306) open
14:14:21 policy-api | Waiting for policy-db-migrator port 6824...
14:14:21 policy-api | policy-db-migrator (172.17.0.6:6824) open
14:14:21 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml
14:14:21 policy-api |
14:14:21 policy-api | . ____ _ __ _ _
14:14:21 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \
14:14:21 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
14:14:21 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) )
14:14:21 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / /
14:14:21 policy-api | =========|_|==============|___/=/_/_/_/
14:14:21 policy-api | :: Spring Boot :: (v3.1.8)
14:14:21 policy-api |
14:14:21 policy-api | [2024-04-09T14:12:01.980+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.10 with PID 22 (/app/api.jar started by policy in /opt/app/policy/api/bin)
14:14:21 policy-api | [2024-04-09T14:12:01.982+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default"
14:14:21 policy-api | [2024-04-09T14:12:03.667+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode.
14:14:21 policy-api | [2024-04-09T14:12:03.761+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 84 ms. Found 6 JPA repository interfaces.
14:14:21 policy-api | [2024-04-09T14:12:04.184+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
14:14:21 policy-api | [2024-04-09T14:12:04.184+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler
14:14:21 policy-api | [2024-04-09T14:12:04.805+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
14:14:21 policy-api | [2024-04-09T14:12:04.815+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
14:14:21 policy-api | [2024-04-09T14:12:04.817+00:00|INFO|StandardService|main] Starting service [Tomcat]
14:14:21 policy-api | [2024-04-09T14:12:04.817+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18]
14:14:21 policy-api | [2024-04-09T14:12:04.903+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext
14:14:21 policy-api | [2024-04-09T14:12:04.903+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2859 ms
14:14:21 policy-api | [2024-04-09T14:12:05.325+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
14:14:21 policy-api | [2024-04-09T14:12:05.395+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1
14:14:21 policy-api | [2024-04-09T14:12:05.398+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer
14:14:21 policy-api | [2024-04-09T14:12:05.442+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
14:14:21 policy-api | [2024-04-09T14:12:05.800+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
14:14:21 policy-api | [2024-04-09T14:12:05.819+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
14:14:21 policy-api | [2024-04-09T14:12:05.914+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@63b3ee82
14:14:21 policy-api | [2024-04-09T14:12:05.916+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
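policy-api is now up on Tomcat port 6969 with context path /policy/api/v1 and a HikariCP pool into MariaDB. A liveness-probe sketch using the JDK HTTP client; the /healthcheck path and the basic-auth credentials are assumptions here, since neither appears in this part of the log:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;

    public class ApiProbe {
        public static void main(String[] args) throws Exception {
            // Hypothetical credentials; the CSIT compose files inject the real ones.
            String auth = Base64.getEncoder().encodeToString("policyadmin:password".getBytes());
            HttpRequest req = HttpRequest.newBuilder()
                    .uri(URI.create("http://policy-api:6969/policy/api/v1/healthcheck"))
                    .header("Authorization", "Basic " + auth)
                    .GET()
                    .build();
            HttpResponse<String> resp = HttpClient.newHttpClient()
                    .send(req, HttpResponse.BodyHandlers.ofString());
            System.out.println(resp.statusCode() + " " + resp.body());
        }
    }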
14:14:21 policy-api | [2024-04-09T14:12:07.772+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
14:14:21 policy-api | [2024-04-09T14:12:07.776+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
14:14:21 policy-api | [2024-04-09T14:12:08.764+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml
14:14:21 policy-apex-pdp | internal.leave.group.on.close = true
14:14:21 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
14:14:21 policy-apex-pdp | isolation.level = read_uncommitted
14:14:21 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
14:14:21 policy-apex-pdp | max.partition.fetch.bytes = 1048576
14:14:21 policy-apex-pdp | max.poll.interval.ms = 300000
14:14:21 policy-apex-pdp | max.poll.records = 500
14:14:21 policy-apex-pdp | metadata.max.age.ms = 300000
14:14:21 policy-apex-pdp | metric.reporters = []
14:14:21 policy-apex-pdp | metrics.num.samples = 2
14:14:21 policy-apex-pdp | metrics.recording.level = INFO
14:14:21 policy-apex-pdp | metrics.sample.window.ms = 30000
14:14:21 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
14:14:21 policy-apex-pdp | receive.buffer.bytes = 65536
14:14:21 policy-apex-pdp | reconnect.backoff.max.ms = 1000
14:14:21 policy-apex-pdp | reconnect.backoff.ms = 50
14:14:21 policy-apex-pdp | request.timeout.ms = 30000
14:14:21 policy-apex-pdp | retry.backoff.ms = 100
14:14:21 policy-apex-pdp | sasl.client.callback.handler.class = null
14:14:21 policy-apex-pdp | sasl.jaas.config = null
14:14:21 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
14:14:21 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
14:14:21 policy-apex-pdp | sasl.kerberos.service.name = null
14:14:21 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
14:14:21 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
14:14:21 policy-apex-pdp | sasl.login.callback.handler.class = null
14:14:21 policy-apex-pdp | sasl.login.class = null
14:14:21 policy-apex-pdp | sasl.login.connect.timeout.ms = null
14:14:21 policy-apex-pdp | sasl.login.read.timeout.ms = null
14:14:21 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
14:14:21 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
14:14:21 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
14:14:21 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
14:14:21 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
14:14:21 policy-apex-pdp | sasl.login.retry.backoff.ms = 100
14:14:21 policy-apex-pdp | sasl.mechanism = GSSAPI
14:14:21 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
14:14:21 policy-apex-pdp | sasl.oauthbearer.expected.audience = null
14:14:21 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
14:14:21 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
14:14:21 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
14:14:21 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
14:14:21 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
14:14:21 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
14:14:21 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
14:14:21 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
14:14:21 policy-apex-pdp | security.protocol = PLAINTEXT
14:14:21 policy-apex-pdp | security.providers = null
14:14:21 policy-apex-pdp | send.buffer.bytes = 131072
14:14:21 policy-apex-pdp | session.timeout.ms = 45000
14:14:21 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
14:14:21 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
14:14:21 policy-apex-pdp | ssl.cipher.suites = null
14:14:21 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
14:14:21 policy-apex-pdp | ssl.endpoint.identification.algorithm = https
14:14:21 policy-apex-pdp | ssl.engine.factory.class = null
14:14:21 policy-apex-pdp | ssl.key.password = null
14:14:21 policy-apex-pdp | ssl.keymanager.algorithm = SunX509
14:14:21 policy-apex-pdp | ssl.keystore.certificate.chain = null
14:14:21 policy-apex-pdp | ssl.keystore.key = null
14:14:21 policy-apex-pdp | ssl.keystore.location = null
14:14:21 policy-apex-pdp | ssl.keystore.password = null
14:14:21 policy-apex-pdp | ssl.keystore.type = JKS
14:14:21 policy-apex-pdp | ssl.protocol = TLSv1.3
14:14:21 policy-apex-pdp | ssl.provider = null
14:14:21 policy-apex-pdp | ssl.secure.random.implementation = null
14:14:21 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
14:14:21 policy-apex-pdp | ssl.truststore.certificates = null
14:14:21 policy-apex-pdp | ssl.truststore.location = null
14:14:21 policy-apex-pdp | ssl.truststore.password = null
14:14:21 policy-apex-pdp | ssl.truststore.type = JKS
14:14:21 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
14:14:21 policy-apex-pdp |
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.112+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.112+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.112+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1712671946111
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.114+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-5bf355d1-b191-4690-8ff2-dd6842394381-1, groupId=5bf355d1-b191-4690-8ff2-dd6842394381] Subscribed to topic(s): policy-pdp-pap
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.125+00:00|INFO|ServiceManager|main] service manager starting
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.126+00:00|INFO|ServiceManager|main] service manager starting topics
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.129+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=5bf355d1-b191-4690-8ff2-dd6842394381, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.154+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
14:14:21 policy-apex-pdp | allow.auto.create.topics = true
14:14:21 policy-apex-pdp | auto.commit.interval.ms = 5000
14:14:21 policy-apex-pdp | auto.include.jmx.reporter = true
14:14:21 policy-apex-pdp | auto.offset.reset = latest
14:14:21 policy-apex-pdp | bootstrap.servers = [kafka:9092]
14:14:21 policy-apex-pdp | check.crcs = true
14:14:21 policy-apex-pdp | client.dns.lookup = use_all_dns_ips
14:14:21 policy-apex-pdp | client.id = consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2
14:14:21 policy-apex-pdp | client.rack =
14:14:21 policy-apex-pdp | connections.max.idle.ms = 540000
14:14:21 policy-apex-pdp | default.api.timeout.ms = 60000
14:14:21 policy-apex-pdp | enable.auto.commit = true
14:14:21 policy-apex-pdp | exclude.internal.topics = true
14:14:21 policy-apex-pdp | fetch.max.bytes = 52428800
14:14:21 policy-apex-pdp | fetch.max.wait.ms = 500
14:14:21 policy-apex-pdp | fetch.min.bytes = 1
14:14:21 policy-apex-pdp | group.id = 5bf355d1-b191-4690-8ff2-dd6842394381
14:14:21 policy-apex-pdp | group.instance.id = null
14:14:21 policy-apex-pdp | heartbeat.interval.ms = 3000
14:14:21 policy-apex-pdp | interceptor.classes = []
14:14:21 policy-apex-pdp | internal.leave.group.on.close = true
14:14:21 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false
14:14:21 policy-apex-pdp | isolation.level = read_uncommitted
14:14:21 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
14:14:21 policy-apex-pdp | max.partition.fetch.bytes = 1048576
14:14:21 policy-apex-pdp | max.poll.interval.ms = 300000
14:14:21 policy-apex-pdp | max.poll.records = 500
14:14:21 policy-apex-pdp | metadata.max.age.ms = 300000
14:14:21 policy-apex-pdp | metric.reporters = []
14:14:21 policy-apex-pdp | metrics.num.samples = 2
14:14:21 policy-apex-pdp | metrics.recording.level = INFO
14:14:21 policy-apex-pdp | metrics.sample.window.ms = 30000
14:14:21 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
14:14:21 policy-apex-pdp | receive.buffer.bytes = 65536
14:14:21 policy-apex-pdp | reconnect.backoff.max.ms = 1000
14:14:21 policy-apex-pdp | reconnect.backoff.ms = 50
14:14:21 policy-apex-pdp | request.timeout.ms = 30000
14:14:21 policy-apex-pdp | retry.backoff.ms = 100
14:14:21 policy-apex-pdp | sasl.client.callback.handler.class = null
14:14:21 policy-apex-pdp | sasl.jaas.config = null
14:14:21 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
14:14:21 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
14:14:21 policy-apex-pdp | sasl.kerberos.service.name = null
14:14:21 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
14:14:21 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
14:14:21 policy-apex-pdp | sasl.login.callback.handler.class = null
14:14:21 policy-apex-pdp | sasl.login.class = null
14:14:21 policy-apex-pdp | sasl.login.connect.timeout.ms = null
14:14:21 policy-apex-pdp | sasl.login.read.timeout.ms = null
14:14:21 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
14:14:21 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
14:14:21 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
14:14:21 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
14:14:21 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
14:14:21 policy-apex-pdp | sasl.login.retry.backoff.ms = 100
14:14:21 policy-apex-pdp | sasl.mechanism = GSSAPI
14:14:21 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
14:14:21 policy-apex-pdp | sasl.oauthbearer.expected.audience = null
14:14:21 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
14:14:21 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
14:14:21 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
14:14:21 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
14:14:21 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
14:14:21 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
14:14:21 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
14:14:21 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
14:14:21 policy-apex-pdp | security.protocol = PLAINTEXT
14:14:21 policy-apex-pdp | security.providers = null
14:14:21 policy-apex-pdp | send.buffer.bytes = 131072
14:14:21 policy-apex-pdp | session.timeout.ms = 45000
14:14:21 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
14:14:21 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
14:14:21 policy-apex-pdp | ssl.cipher.suites = null
14:14:21 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
14:14:21 policy-apex-pdp | ssl.endpoint.identification.algorithm = https
14:14:21 policy-apex-pdp | ssl.engine.factory.class = null
14:14:21 policy-apex-pdp | ssl.key.password = null
14:14:21 policy-apex-pdp | ssl.keymanager.algorithm = SunX509
14:14:21 policy-apex-pdp | ssl.keystore.certificate.chain = null
14:14:21 policy-apex-pdp | ssl.keystore.key = null
14:14:21 policy-apex-pdp | ssl.keystore.location = null
14:14:21 policy-apex-pdp | ssl.keystore.password = null
14:14:21 policy-apex-pdp | ssl.keystore.type = JKS
14:14:21 policy-apex-pdp | ssl.protocol = TLSv1.3
14:14:21 policy-apex-pdp | ssl.provider = null
14:14:21 policy-apex-pdp | ssl.secure.random.implementation = null
14:14:21 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
14:14:21 policy-apex-pdp | ssl.truststore.certificates = null
14:14:21 policy-apex-pdp | ssl.truststore.location = null
14:14:21 policy-apex-pdp | ssl.truststore.password = null
14:14:21 policy-apex-pdp | ssl.truststore.type = JKS
14:14:21 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
14:14:21 policy-apex-pdp |
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.161+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.161+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.161+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1712671946161
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.161+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2, groupId=5bf355d1-b191-4690-8ff2-dd6842394381] Subscribed to topic(s): policy-pdp-pap
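Apart from bootstrap.servers, group.id, client.id and auto.offset.reset, every value in the ConsumerConfig dump above is a stock Kafka 3.6 client default. A minimal sketch of a standalone consumer built from just those non-default settings and subscribed to the same policy-pdp-pap topic (the group id below is a made-up placeholder; the PDP generates a UUID-based one at runtime):

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapListener {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-test-group"); // placeholder group id
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.println(r.value()); // PDP_STATUS / PDP_UPDATE JSON payloads
                }
            }
        }
    }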
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.162+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=be2ac700-46f7-4847-9bf9-d74c80869d4f, alive=false, publisher=null]]: starting
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.172+00:00|INFO|ProducerConfig|main] ProducerConfig values:
14:14:21 policy-apex-pdp | acks = -1
14:14:21 policy-apex-pdp | auto.include.jmx.reporter = true
14:14:21 policy-apex-pdp | batch.size = 16384
14:14:21 policy-apex-pdp | bootstrap.servers = [kafka:9092]
14:14:21 policy-apex-pdp | buffer.memory = 33554432
14:14:21 policy-apex-pdp | client.dns.lookup = use_all_dns_ips
14:14:21 policy-apex-pdp | client.id = producer-1
14:14:21 policy-apex-pdp | compression.type = none
14:14:21 policy-apex-pdp | connections.max.idle.ms = 540000
14:14:21 policy-apex-pdp | delivery.timeout.ms = 120000
14:14:21 policy-apex-pdp | enable.idempotence = true
14:14:21 policy-apex-pdp | interceptor.classes = []
14:14:21 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
14:14:21 policy-apex-pdp | linger.ms = 0
14:14:21 policy-apex-pdp | max.block.ms = 60000
14:14:21 policy-apex-pdp | max.in.flight.requests.per.connection = 5
14:14:21 policy-apex-pdp | max.request.size = 1048576
14:14:21 policy-apex-pdp | metadata.max.age.ms = 300000
14:14:21 policy-apex-pdp | metadata.max.idle.ms = 300000
14:14:21 policy-apex-pdp | metric.reporters = []
14:14:21 policy-apex-pdp | metrics.num.samples = 2
14:14:21 policy-apex-pdp | metrics.recording.level = INFO
14:14:21 policy-apex-pdp | metrics.sample.window.ms = 30000
14:14:21 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true
14:14:21 policy-apex-pdp | partitioner.availability.timeout.ms = 0
14:14:21 policy-apex-pdp | partitioner.class = null
14:14:21 policy-apex-pdp | partitioner.ignore.keys = false
14:14:21 policy-apex-pdp | receive.buffer.bytes = 32768
14:14:21 policy-apex-pdp | reconnect.backoff.max.ms = 1000
14:14:21 policy-apex-pdp | reconnect.backoff.ms = 50
14:14:21 policy-apex-pdp | request.timeout.ms = 30000
14:14:21 policy-apex-pdp | retries = 2147483647
14:14:21 policy-apex-pdp | retry.backoff.ms = 100
14:14:21 policy-apex-pdp | sasl.client.callback.handler.class = null
14:14:21 policy-apex-pdp | sasl.jaas.config = null
14:14:21 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit
14:14:21 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000
14:14:21 policy-apex-pdp | sasl.kerberos.service.name = null
14:14:21 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05
14:14:21 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8
14:14:21 policy-apex-pdp | sasl.login.callback.handler.class = null
14:14:21 policy-apex-pdp | sasl.login.class = null
14:14:21 policy-apex-pdp | sasl.login.connect.timeout.ms = null
14:14:21 policy-apex-pdp | sasl.login.read.timeout.ms = null
14:14:21 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300
14:14:21 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60
14:14:21 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8
14:14:21 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05
14:14:21 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000
14:14:21 policy-apex-pdp | sasl.login.retry.backoff.ms = 100
14:14:21 policy-apex-pdp | sasl.mechanism = GSSAPI
14:14:21 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30
14:14:21 policy-apex-pdp | sasl.oauthbearer.expected.audience = null
14:14:21 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null
14:14:21 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
14:14:21 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
14:14:21 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
14:14:21 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null
14:14:21 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope
14:14:21 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub
14:14:21 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json
14:14:21 simulator | overriding logback.xml
14:14:21 simulator | 2024-04-09 14:11:46,035 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json
14:14:21 simulator | 2024-04-09 14:11:46,092 INFO org.onap.policy.models.simulators starting
14:14:21 simulator | 2024-04-09 14:11:46,093 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties
14:14:21 simulator | 2024-04-09 14:11:46,271 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION
14:14:21 simulator | 2024-04-09 14:11:46,272 INFO org.onap.policy.models.simulators starting A&AI simulator
14:14:21 simulator | 2024-04-09 14:11:46,375 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2a2c13a8{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b6b1987{/,null,STOPPED}, connector=A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
14:14:21 simulator | 2024-04-09 14:11:46,387 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2a2c13a8{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b6b1987{/,null,STOPPED}, connector=A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
14:14:21 simulator | 2024-04-09 14:11:46,389 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2a2c13a8{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b6b1987{/,null,STOPPED}, connector=A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
14:14:21 simulator | 2024-04-09 14:11:46,395 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
14:14:21 simulator | 2024-04-09 14:11:46,449 INFO Session workerName=node0
14:14:21 simulator | 2024-04-09 14:11:46,976 INFO Using GSON for REST calls
14:14:21 simulator | 2024-04-09 14:11:47,069 INFO Started o.e.j.s.ServletContextHandler@b6b1987{/,null,AVAILABLE}
14:14:21 simulator | 2024-04-09 14:11:47,080 INFO Started A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}
14:14:21 simulator | 2024-04-09 14:11:47,091 INFO Started Server@2a2c13a8{STARTING}[11.0.20,sto=0] @1535ms
14:14:21 simulator | 2024-04-09 14:11:47,091 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2a2c13a8{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b6b1987{/,null,AVAILABLE}, connector=A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4298 ms.
14:14:21 simulator | 2024-04-09 14:11:47,100 INFO org.onap.policy.models.simulators starting SDNC simulator
14:14:21 simulator | 2024-04-09 14:11:47,103 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@62452cc9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6941827a{/,null,STOPPED}, connector=SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
14:14:21 simulator | 2024-04-09 14:11:47,107 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@62452cc9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6941827a{/,null,STOPPED}, connector=SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
14:14:21 simulator | 2024-04-09 14:11:47,107 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@62452cc9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6941827a{/,null,STOPPED}, connector=SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
14:14:21 simulator | 2024-04-09 14:11:47,108 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
14:14:21 simulator | 2024-04-09 14:11:47,117 INFO Session workerName=node0
14:14:21 simulator | 2024-04-09 14:11:47,171 INFO Using GSON for REST calls
14:14:21 simulator | 2024-04-09 14:11:47,182 INFO Started o.e.j.s.ServletContextHandler@6941827a{/,null,AVAILABLE}
14:14:21 simulator | 2024-04-09 14:11:47,184 INFO Started SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}
14:14:21 simulator | 2024-04-09 14:11:47,184 INFO Started Server@62452cc9{STARTING}[11.0.20,sto=0] @1628ms
14:14:21 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null
14:14:21 policy-apex-pdp | security.protocol = PLAINTEXT
14:14:21 policy-apex-pdp | security.providers = null
14:14:21 policy-apex-pdp | send.buffer.bytes = 131072
14:14:21 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000
14:14:21 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000
14:14:21 policy-apex-pdp | ssl.cipher.suites = null
14:14:21 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
14:14:21 policy-apex-pdp | ssl.endpoint.identification.algorithm = https
14:14:21 policy-apex-pdp | ssl.engine.factory.class = null
14:14:21 policy-apex-pdp | ssl.key.password = null
14:14:21 policy-apex-pdp | ssl.keymanager.algorithm = SunX509
14:14:21 policy-apex-pdp | ssl.keystore.certificate.chain = null
14:14:21 policy-apex-pdp | ssl.keystore.key = null
14:14:21 policy-apex-pdp | ssl.keystore.location = null
14:14:21 policy-apex-pdp | ssl.keystore.password = null
14:14:21 policy-apex-pdp | ssl.keystore.type = JKS
14:14:21 policy-apex-pdp | ssl.protocol = TLSv1.3
14:14:21 policy-apex-pdp | ssl.provider = null
14:14:21 policy-apex-pdp | ssl.secure.random.implementation = null
14:14:21 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX
14:14:21 policy-apex-pdp | ssl.truststore.certificates = null
14:14:21 policy-apex-pdp | ssl.truststore.location = null
14:14:21 policy-apex-pdp | ssl.truststore.password = null
14:14:21 policy-apex-pdp | ssl.truststore.type = JKS
14:14:21 policy-apex-pdp | transaction.timeout.ms = 60000
14:14:21 policy-apex-pdp | transactional.id = null
14:14:21 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer
14:14:21 policy-apex-pdp |
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.180+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer.
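The ProducerConfig dump above is the standard idempotent-producer profile: acks = -1 (i.e. all), enable.idempotence = true, retries = 2147483647 and max.in.flight.requests.per.connection = 5, which is why the client then logs "Instantiated an idempotent producer." A sketch of the equivalent explicit setup publishing to the same topic (the payload here is a trivial stand-in):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PdpPapPublisher {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.ACKS_CONFIG, "all");              // logged as acks = -1
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true); // implies retries = 2147483647
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_STATUS\"}"));
                producer.flush(); // block until the broker acknowledges
            }
        }
    }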
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.192+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.192+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.192+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1712671946192
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.193+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=be2ac700-46f7-4847-9bf9-d74c80869d4f, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.193+00:00|INFO|ServiceManager|main] service manager starting set alive
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.193+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.195+00:00|INFO|ServiceManager|main] service manager starting topic sinks
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.195+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.196+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.196+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.196+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.196+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=5bf355d1-b191-4690-8ff2-dd6842394381, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@e077866
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.196+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=5bf355d1-b191-4690-8ff2-dd6842394381, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.197+00:00|INFO|ServiceManager|main] service manager starting Create REST server
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.211+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers:
14:14:21 policy-apex-pdp | []
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.214+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap]
14:14:21 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"a613696f-9b67-4851-908a-282ce03d5805","timestampMs":1712671946198,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup"}
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.339+00:00|INFO|ServiceManager|main] service manager starting Rest Server
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.339+00:00|INFO|ServiceManager|main] service manager starting
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.340+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.340+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5ebd56e9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@63f34b70{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.349+00:00|INFO|ServiceManager|main] service manager started
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.349+00:00|INFO|ServiceManager|main] service manager started
14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.349+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully.
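The first message published on policy-pdp-pap is the PDP_STATUS heartbeat shown above. A sketch of decoding that exact payload with Gson (the same library the simulators in this stack use for REST calls); the POJO below is illustrative, not ONAP's PdpStatus model class:

    import com.google.gson.Gson;

    public class HeartbeatParse {
        // Illustrative POJO mirroring the logged PDP_STATUS fields.
        static class PdpStatus {
            String pdpType, state, healthy, description, messageName, requestId, name, pdpGroup;
            long timestampMs;
        }

        public static void main(String[] args) {
            String json = "{\"pdpType\":\"apex\",\"state\":\"PASSIVE\",\"healthy\":\"HEALTHY\","
                    + "\"description\":\"Pdp Heartbeat\",\"messageName\":\"PDP_STATUS\","
                    + "\"requestId\":\"a613696f-9b67-4851-908a-282ce03d5805\","
                    + "\"timestampMs\":1712671946198,"
                    + "\"name\":\"apex-87d34be7-6039-47df-ad80-62271f3f875b\",\"pdpGroup\":\"defaultGroup\"}";
            PdpStatus status = new Gson().fromJson(json, PdpStatus.class);
            // Prints: apex is PASSIVE in defaultGroup
            System.out.println(status.pdpType + " is " + status.state + " in " + status.pdpGroup);
        }
    }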
14:14:21 kafka | [2024-04-09 14:11:59,396] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController)
14:14:21 kafka | [2024-04-09 14:11:59,406] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager)
14:14:21 kafka | [2024-04-09 14:11:59,416] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController)
14:14:21 kafka | [2024-04-09 14:11:59,416] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread)
14:14:21 simulator | 2024-04-09 14:11:47,184 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@62452cc9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6941827a{/,null,AVAILABLE}, connector=SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4923 ms.
14:14:21 simulator | 2024-04-09 14:11:47,185 INFO org.onap.policy.models.simulators starting SO simulator
14:14:21 simulator | 2024-04-09 14:11:47,188 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@488eb7f2{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@5e81e5ac{/,null,STOPPED}, connector=SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START
14:14:21 simulator | 2024-04-09 14:11:47,188 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@488eb7f2{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@5e81e5ac{/,null,STOPPED}, connector=SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
14:14:21 simulator | 2024-04-09 14:11:47,190 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@488eb7f2{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@5e81e5ac{/,null,STOPPED}, connector=SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING
14:14:21 simulator | 2024-04-09 14:11:47,190 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0
14:14:21 simulator | 2024-04-09 14:11:47,197 INFO Session workerName=node0
14:14:21 simulator | 2024-04-09 14:11:47,252 INFO Using GSON for REST calls
14:14:21 simulator | 2024-04-09 14:11:47,264 INFO Started o.e.j.s.ServletContextHandler@5e81e5ac{/,null,AVAILABLE}
14:14:21 simulator | 2024-04-09 14:11:47,266 INFO Started SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}
14:14:21 simulator | 2024-04-09 14:11:47,266 INFO Started Server@488eb7f2{STARTING}[11.0.20,sto=0] @1710ms
14:14:21 simulator | 2024-04-09 14:11:47,266 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@488eb7f2{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@5e81e5ac{/,null,AVAILABLE}, connector=SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4923 ms.
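A&AI (6666), SDNC (6668) and SO (6669) are now listening, and VFC (6670) comes up just below. A TCP reachability sketch for all four endpoints; the "simulator" hostname assumes the compose network, so substitute whatever is routable from the caller:

    import java.net.InetSocketAddress;
    import java.net.Socket;
    import java.util.Map;

    public class SimulatorPortCheck {
        public static void main(String[] args) {
            // Ports taken from the Jetty startup lines in the log.
            Map<String, Integer> sims = Map.of("A&AI", 6666, "SDNC", 6668, "SO", 6669, "VFC", 6670);
            sims.forEach((name, port) -> {
                try (Socket s = new Socket()) {
                    s.connect(new InetSocketAddress("simulator", port), 2000);
                    System.out.println(name + " simulator is listening on " + port);
                } catch (Exception e) {
                    System.out.println(name + " simulator not reachable on " + port + ": " + e.getMessage());
                }
            });
        }
    }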
14:14:21 simulator | 2024-04-09 14:11:47,267 INFO org.onap.policy.models.simulators starting VFC simulator 14:14:21 simulator | 2024-04-09 14:11:47,269 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6035b93b{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@320de594{/,null,STOPPED}, connector=VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 14:14:21 simulator | 2024-04-09 14:11:47,269 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6035b93b{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@320de594{/,null,STOPPED}, connector=VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 14:14:21 simulator | 2024-04-09 14:11:47,277 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6035b93b{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@320de594{/,null,STOPPED}, connector=VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 14:14:21 simulator | 2024-04-09 14:11:47,279 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 14:14:21 simulator | 2024-04-09 14:11:47,282 INFO Session workerName=node0 14:14:21 simulator | 2024-04-09 14:11:47,322 INFO Using GSON for REST calls 14:14:21 simulator | 2024-04-09 14:11:47,330 INFO Started o.e.j.s.ServletContextHandler@320de594{/,null,AVAILABLE} 14:14:21 simulator | 2024-04-09 14:11:47,331 INFO Started VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} 14:14:21 simulator | 2024-04-09 14:11:47,332 INFO Started Server@6035b93b{STARTING}[11.0.20,sto=0] @1776ms 14:14:21 simulator | 2024-04-09 14:11:47,332 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, 
swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6035b93b{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@320de594{/,null,AVAILABLE}, connector=VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4938 ms. 14:14:21 simulator | 2024-04-09 14:11:47,333 INFO org.onap.policy.models.simulators started 14:14:21 policy-api | [2024-04-09T14:12:09.594+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 14:14:21 policy-api | [2024-04-09T14:12:10.815+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 14:14:21 policy-api | [2024-04-09T14:12:11.036+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@5c1348c6, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@4f3eddc0, org.springframework.security.web.context.SecurityContextHolderFilter@69cf9acb, org.springframework.security.web.header.HeaderWriterFilter@62c4ad40, org.springframework.security.web.authentication.logout.LogoutFilter@dcaa0e8, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@3341ba8e, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@5f160f9c, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@234a08ea, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@729f8c5d, org.springframework.security.web.access.ExceptionTranslationFilter@4567dcbc, org.springframework.security.web.access.intercept.AuthorizationFilter@543d242e] 14:14:21 policy-api | [2024-04-09T14:12:11.840+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 14:14:21 policy-api | [2024-04-09T14:12:11.965+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 14:14:21 policy-api | [2024-04-09T14:12:11.995+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' 14:14:21 policy-api | [2024-04-09T14:12:12.014+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.763 seconds (process running for 11.424) 14:14:21 policy-api | [2024-04-09T14:12:28.402+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 14:14:21 policy-api | [2024-04-09T14:12:28.402+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 14:14:21 policy-api | [2024-04-09T14:12:28.403+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms 14:14:21 policy-api | [2024-04-09T14:12:28.673+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers: 14:14:21 policy-api | [] 14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT 
NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | > upgrade 0670-toscapolicies.sql 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | > upgrade 0690-toscapolicy.sql 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator |
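Each "> upgrade NNNN-*.sql" block above is one numbered DDL script that the migrator applies in sequence; CREATE TABLE IF NOT EXISTS keeps a replayed step harmless, and MariaDB accepts (and quietly ignores) the symbolic PK_* label after PRIMARY KEY. A minimal sketch of running one such step over JDBC, with a made-up URL and credentials purely for illustration:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class MigrationStepSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; the CSIT migrator talks to a MariaDB container.
        try (Connection db = DriverManager.getConnection(
                     "jdbc:mariadb://localhost:3306/policyadmin", "policy_user", "policy_pass");
             Statement stmt = db.createStatement()) {
            // Same shape as the 0670-toscapolicies.sql step above: idempotent create,
            // composite primary key on (name, version).
            stmt.executeUpdate(
                    "CREATE TABLE IF NOT EXISTS toscapolicies ("
                    + "name VARCHAR(120) NOT NULL, "
                    + "version VARCHAR(20) NOT NULL, "
                    + "PRIMARY KEY PK_TOSCAPOLICIES (name, version))");
        }
    }
}
```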
14:14:21 policy-db-migrator | > upgrade 0730-toscaproperty.sql 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.350+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5ebd56e9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@63f34b70{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.491+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2, groupId=5bf355d1-b191-4690-8ff2-dd6842394381] Cluster ID: TupwFhGQQjGmvCIddVeH4w 14:14:21 policy-apex-pdp |
[2024-04-09T14:12:26.491+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: TupwFhGQQjGmvCIddVeH4w 14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.492+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.492+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2, groupId=5bf355d1-b191-4690-8ff2-dd6842394381] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.499+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2, groupId=5bf355d1-b191-4690-8ff2-dd6842394381] (Re-)joining group 14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.532+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2, groupId=5bf355d1-b191-4690-8ff2-dd6842394381] Request joining group due to: need to re-join with the given member-id: consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2-780b758d-7817-467f-b505-47072bd7ea3f 14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.532+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2, groupId=5bf355d1-b191-4690-8ff2-dd6842394381] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.532+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2, groupId=5bf355d1-b191-4690-8ff2-dd6842394381] (Re-)joining group 14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.966+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls 14:14:21 policy-apex-pdp | [2024-04-09T14:12:26.966+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls 14:14:21 policy-apex-pdp | [2024-04-09T14:12:29.535+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2, groupId=5bf355d1-b191-4690-8ff2-dd6842394381] Successfully joined group with generation Generation{generationId=1, memberId='consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2-780b758d-7817-467f-b505-47072bd7ea3f', protocol='range'} 14:14:21 policy-apex-pdp | [2024-04-09T14:12:29.543+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2, groupId=5bf355d1-b191-4690-8ff2-dd6842394381] Finished assignment for group at generation 1: {consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2-780b758d-7817-467f-b505-47072bd7ea3f=Assignment(partitions=[policy-pdp-pap-0])} 14:14:21 policy-apex-pdp | [2024-04-09T14:12:29.550+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2, groupId=5bf355d1-b191-4690-8ff2-dd6842394381] Successfully synced group in generation Generation{generationId=1, memberId='consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2-780b758d-7817-467f-b505-47072bd7ea3f', protocol='range'}
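The ConsumerCoordinator records above are the standard two-round Kafka group join: the first JoinGroup is rejected with MemberIdRequiredException so the broker can hand out a member-id, the immediate retry succeeds, and the range assignor gives the lone policy-pdp-pap-0 partition to the single member. A minimal consumer that would go through the same sequence might look as follows (the group id and deserializers are illustrative, not the apex-pdp configuration):

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PdpPapConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");
        props.put("group.id", "example-pdp-group"); // apex-pdp uses a generated UUID group id
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        // With no committed offset the position is reset, as in the
        // "Found no committed offset ... Resetting offset" records below.
        props.put("auto.offset.reset", "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // subscribe() + poll() drive the JoinGroup/SyncGroup exchange logged above.
            consumer.subscribe(List.of("policy-pdp-pap"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}
```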
14:14:21 policy-apex-pdp | [2024-04-09T14:12:29.550+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2, groupId=5bf355d1-b191-4690-8ff2-dd6842394381] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 14:14:21 policy-apex-pdp | [2024-04-09T14:12:29.552+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2, groupId=5bf355d1-b191-4690-8ff2-dd6842394381] Adding newly assigned partitions: policy-pdp-pap-0 14:14:21 policy-apex-pdp | [2024-04-09T14:12:29.558+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2, groupId=5bf355d1-b191-4690-8ff2-dd6842394381] Found no committed offset for partition policy-pdp-pap-0 14:14:21 policy-apex-pdp | [2024-04-09T14:12:29.567+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2, groupId=5bf355d1-b191-4690-8ff2-dd6842394381] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 14:14:21 policy-apex-pdp | [2024-04-09T14:12:46.197+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 14:14:21 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f71a83e0-8991-48b5-bf16-0f80efc2e25f","timestampMs":1712671966197,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup"} 14:14:21 policy-apex-pdp | [2024-04-09T14:12:46.220+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:14:21 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f71a83e0-8991-48b5-bf16-0f80efc2e25f","timestampMs":1712671966197,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup"} 14:14:21 policy-apex-pdp | [2024-04-09T14:12:46.223+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 14:14:21 policy-apex-pdp | [2024-04-09T14:12:46.371+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:14:21 policy-apex-pdp | {"source":"pap-d567b5c7-abc8-4867-b3e8-f75d8faeecf1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f35c2eaa-9447-4409-bc81-28e3583921e3","timestampMs":1712671966320,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:14:21 policy-apex-pdp | [2024-04-09T14:12:46.381+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher 14:14:21 policy-apex-pdp | [2024-04-09T14:12:46.382+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] 14:14:21 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"58bba064-d794-42f0-bfa3-6b19bdabadb3","timestampMs":1712671966382,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:14:21 policy-apex-pdp | [2024-04-09T14:12:46.383+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap]
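Every [OUT|KAFKA|policy-pdp-pap] record above is followed by the JSON payload it publishes, and the earlier GsonMessageBodyHandler record shows Gson is the serializer in play. A stripped-down sketch of decoding one of these PDP_STATUS heartbeats; the PdpStatus class here is a stand-in holding only the fields visible in the log, not the real policy-models message class:

```java
import com.google.gson.Gson;

public class PdpStatusSketch {
    // Minimal stand-in for the PDP_STATUS payload fields shown in the log.
    static class PdpStatus {
        String pdpType;
        String state;
        String healthy;
        String messageName;
        String name;
        String pdpGroup;
        long timestampMs;
    }

    public static void main(String[] args) {
        String json = "{\"pdpType\":\"apex\",\"state\":\"PASSIVE\",\"healthy\":\"HEALTHY\","
                + "\"messageName\":\"PDP_STATUS\",\"timestampMs\":1712671966197,"
                + "\"name\":\"apex-87d34be7-6039-47df-ad80-62271f3f875b\",\"pdpGroup\":\"defaultGroup\"}";
        PdpStatus status = new Gson().fromJson(json, PdpStatus.class);
        // A dispatcher keyed on messageName is what produces the
        // "discarding event of type PDP_STATUS" records on the PDP side.
        System.out.println(status.messageName + " from " + status.name);
    }
}
```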
PdpUpdate","policies":[],"response":{"responseTo":"f35c2eaa-9447-4409-bc81-28e3583921e3","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"20e7f5d3-bd9e-4c55-ba54-348ec7aba681","timestampMs":1712671966382,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:14:21 policy-apex-pdp | [2024-04-09T14:12:46.391+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:14:21 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"58bba064-d794-42f0-bfa3-6b19bdabadb3","timestampMs":1712671966382,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:14:21 policy-apex-pdp | [2024-04-09T14:12:46.391+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 14:14:21 policy-apex-pdp | [2024-04-09T14:12:46.391+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.116259596Z level=info msg="Executing migration" id="create test_data table" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.117110942Z level=info msg="Migration successfully executed" id="create test_data table" duration=851.316µs 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.122040072Z level=info msg="Executing migration" id="create dashboard_version table v1" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.122922448Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=882.036µs 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.125893623Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.12681224Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=918.507µs 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.129777384Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.130675731Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=897.856µs 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.135111932Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.135303076Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=190.943µs 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.13829338Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.138697178Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=402.978µs 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.142469257Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.142555339Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=86.832µs 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.146576822Z level=info 
msg="Executing migration" id="create team table" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.147408888Z level=info msg="Migration successfully executed" id="create team table" duration=833.026µs 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.150327711Z level=info msg="Executing migration" id="add index team.org_id" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.151574224Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.245273ms 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.154508438Z level=info msg="Executing migration" id="add unique index team_org_id_name" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.155560737Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.051639ms 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.159239385Z level=info msg="Executing migration" id="Add column uid in team" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.164039293Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.797558ms 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.167208051Z level=info msg="Executing migration" id="Update uid column values in team" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.167423945Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=216.414µs 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.171269786Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.172287144Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.017308ms 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.175668187Z level=info msg="Executing migration" id="create team member table" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.176465091Z level=info msg="Migration successfully executed" id="create team member table" duration=796.945µs 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.180281911Z level=info msg="Executing migration" id="add index team_member.org_id" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.181272189Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=990.218µs 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.184090791Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.185088969Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=997.938µs 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.187954962Z level=info msg="Executing migration" id="add index team_member.team_id" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.188990941Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.033349ms 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.192879783Z level=info msg="Executing migration" id="Add column email to team table" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.197554628Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.674766ms 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.200563884Z level=info msg="Executing migration" id="Add column external to team_member table" 14:14:21 grafana | logger=migrator 
t=2024-04-09T14:11:53.205187998Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.623674ms 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.231995953Z level=info msg="Executing migration" id="Add column permission to team_member table" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.237890931Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=5.891288ms 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.241907695Z level=info msg="Executing migration" id="create dashboard acl table" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.242918124Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.006349ms 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.245834508Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.246837626Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.003499ms 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.250737508Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.251832888Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.09469ms 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.256491474Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.257479262Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=987.188µs 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.260514828Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.261627279Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.111711ms 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.266169043Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.267616989Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.447156ms 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.270884889Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.271838417Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=954.208µs 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.275384352Z level=info msg="Executing migration" id="add index dashboard_permission" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.276398951Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.011889ms 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.280328084Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.280875984Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=543.74µs 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.2833945Z level=info msg="Executing migration" id="delete 
acl rules for deleted dashboards and folders" 14:14:21 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f35c2eaa-9447-4409-bc81-28e3583921e3","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"20e7f5d3-bd9e-4c55-ba54-348ec7aba681","timestampMs":1712671966382,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:14:21 policy-apex-pdp | [2024-04-09T14:12:46.391+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 14:14:21 policy-apex-pdp | [2024-04-09T14:12:46.412+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:14:21 policy-apex-pdp | {"source":"pap-d567b5c7-abc8-4867-b3e8-f75d8faeecf1","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"cb095d6b-1806-48f9-af91-a9c5f08d2e3b","timestampMs":1712671966320,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:14:21 policy-apex-pdp | [2024-04-09T14:12:46.414+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 14:14:21 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"cb095d6b-1806-48f9-af91-a9c5f08d2e3b","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"d74c0d11-396a-45ee-aca8-c28faceef757","timestampMs":1712671966414,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:14:21 policy-apex-pdp | [2024-04-09T14:12:46.422+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:14:21 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"cb095d6b-1806-48f9-af91-a9c5f08d2e3b","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"d74c0d11-396a-45ee-aca8-c28faceef757","timestampMs":1712671966414,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:14:21 policy-apex-pdp | [2024-04-09T14:12:46.423+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 14:14:21 policy-apex-pdp | [2024-04-09T14:12:46.464+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:14:21 policy-apex-pdp | {"source":"pap-d567b5c7-abc8-4867-b3e8-f75d8faeecf1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6626c05b-9878-4bec-8cb9-fdf1ff33442a","timestampMs":1712671966430,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:14:21 policy-apex-pdp | [2024-04-09T14:12:46.466+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 14:14:21 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"6626c05b-9878-4bec-8cb9-fdf1ff33442a","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"b2ae2d05-b707-414a-b0f7-ae1aec005c8b","timestampMs":1712671966465,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:14:21 policy-apex-pdp | [2024-04-09T14:12:46.479+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:14:21 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"6626c05b-9878-4bec-8cb9-fdf1ff33442a","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"b2ae2d05-b707-414a-b0f7-ae1aec005c8b","timestampMs":1712671966465,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:14:21 policy-apex-pdp | [2024-04-09T14:12:46.480+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 14:14:21 policy-apex-pdp | [2024-04-09T14:12:56.154+00:00|INFO|RequestLog|qtp1068445309-33] 172.17.0.5 - policyadmin [09/Apr/2024:14:12:56 +0000] "GET /metrics HTTP/1.1" 200 10649 "-" "Prometheus/2.51.1" 14:14:21 policy-apex-pdp | [2024-04-09T14:13:56.080+00:00|INFO|RequestLog|qtp1068445309-28] 172.17.0.5 - policyadmin [09/Apr/2024:14:13:56 +0000] "GET /metrics HTTP/1.1" 200 10654 "-" "Prometheus/2.51.1" 14:14:21 kafka | [2024-04-09 14:11:59,416] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) 14:14:21 kafka | [2024-04-09 14:11:59,417] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) 14:14:21 kafka | [2024-04-09 14:11:59,417] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) 14:14:21 kafka | [2024-04-09 14:11:59,420] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) 14:14:21 kafka | [2024-04-09 14:11:59,421] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) 14:14:21 kafka | [2024-04-09 14:11:59,421] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) 14:14:21 kafka | [2024-04-09 
14:11:59,422] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) 14:14:21 kafka | [2024-04-09 14:11:59,423] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) 14:14:21 kafka | [2024-04-09 14:11:59,424] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) 14:14:21 kafka | [2024-04-09 14:11:59,429] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) 14:14:21 kafka | [2024-04-09 14:11:59,435] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) 14:14:21 kafka | [2024-04-09 14:11:59,439] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) 14:14:21 kafka | [2024-04-09 14:11:59,439] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) 14:14:21 kafka | [2024-04-09 14:11:59,441] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) 14:14:21 kafka | [2024-04-09 14:11:59,444] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) 14:14:21 kafka | [2024-04-09 14:11:59,444] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) 14:14:21 kafka | [2024-04-09 14:11:59,444] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) 14:14:21 kafka | [2024-04-09 14:11:59,445] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) 14:14:21 kafka | [2024-04-09 14:11:59,445] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) 14:14:21 kafka | [2024-04-09 14:11:59,447] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) 14:14:21 kafka | [2024-04-09 14:11:59,452] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) 14:14:21 kafka | [2024-04-09 14:11:59,454] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) 14:14:21 kafka | [2024-04-09 14:11:59,461] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka/172.17.0.9:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient) 14:14:21 kafka | [2024-04-09 14:11:59,466] INFO Kafka version: 7.6.0-ccs (org.apache.kafka.common.utils.AppInfoParser) 14:14:21 kafka | [2024-04-09 14:11:59,466] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread) 14:14:21 kafka | java.io.IOException: Connection to kafka:9092 (id: 1 rack: null) failed. 
14:14:21 kafka | at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) 14:14:21 kafka | at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:298) 14:14:21 kafka | at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:251) 14:14:21 kafka | at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:130) 14:14:21 kafka | [2024-04-09 14:11:59,468] INFO [Controller id=1, targetBrokerId=1] Client requested connection close from node 1 (org.apache.kafka.clients.NetworkClient) 14:14:21 kafka | [2024-04-09 14:11:59,466] INFO Kafka commitId: 1991cb733c81d6791626f88253a042b2ec835ab8 (org.apache.kafka.common.utils.AppInfoParser) 14:14:21 kafka | [2024-04-09 14:11:59,468] INFO Kafka startTimeMs: 1712671919454 (org.apache.kafka.common.utils.AppInfoParser) 14:14:21 kafka | [2024-04-09 14:11:59,469] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) 14:14:21 kafka | [2024-04-09 14:11:59,469] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) 14:14:21 kafka | [2024-04-09 14:11:59,470] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) 14:14:21 kafka | [2024-04-09 14:11:59,471] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) 14:14:21 kafka | [2024-04-09 14:11:59,471] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) 14:14:21 kafka | [2024-04-09 14:11:59,472] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) 14:14:21 kafka | [2024-04-09 14:11:59,483] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) 14:14:21 kafka | [2024-04-09 14:11:59,572] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 14:14:21 kafka | [2024-04-09 14:11:59,641] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 14:14:21 kafka | [2024-04-09 14:11:59,654] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 14:14:21 kafka | [2024-04-09 14:11:59,677] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 14:14:21 kafka | [2024-04-09 14:12:04,484] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 14:14:21 kafka | [2024-04-09 14:12:04,484] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 14:14:21 kafka | [2024-04-09 14:12:24,800] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) 14:14:21 kafka | [2024-04-09 14:12:24,803] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient)
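The records above show the ZooKeeper-backed AdminZkClient creating policy-pdp-pap with a single partition on broker 1; the next record creates the compacted 50-partition __consumer_offsets topic. A client-side sketch of the equivalent request for the test topic through the Kafka Admin API (the broker keeps ownership of the internal __consumer_offsets topic, so only policy-pdp-pap is created here):

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class TopicCreationSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // One partition, replication factor 1 -- the same layout as the
            // "initial partition assignment HashMap(0 -> ArrayBuffer(1))" record above.
            NewTopic pdpPap = new NewTopic("policy-pdp-pap", 1, (short) 1);
            admin.createTopics(List.of(pdpPap)).all().get(); // block until the controller applies it
        }
    }
}
```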
14:14:21 kafka | [2024-04-09 14:12:24,805] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 14:14:21 kafka | [2024-04-09 14:12:24,807] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) 14:14:21 kafka | [2024-04-09 14:12:24,832] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(ITmYpZ6rSK-iF5o_1J2T3Q),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(JIxyITR5QGSmI5P2pGX22A),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49
-> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 14:14:21 kafka | [2024-04-09 14:12:24,834] INFO [Controller id=1] New partition creation callback for 
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) 14:14:21 kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | > upgrade 0770-toscarequirement.sql 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | > upgrade 0780-toscarequirements.sql 14:14:21 policy-db-migrator | 
-------------- 14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:24,836] INFO 
14:14:21 kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,836] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,837] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,838] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,838] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.283684385Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=287.545µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.287550777Z level=info msg="Executing migration" id="create tag table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.288513555Z level=info msg="Migration successfully executed" id="create tag table" duration=963.707µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.291371297Z level=info msg="Executing migration" id="add index tag.key_value"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.292335125Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=963.218µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.295167017Z level=info msg="Executing migration" id="create login attempt table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.295911911Z level=info msg="Migration successfully executed" id="create login attempt table" duration=744.394µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.299060029Z level=info msg="Executing migration" id="add index login_attempt.username"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.299969376Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=908.047µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.303678484Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.30456272Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=883.886µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.307398043Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.322627933Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=15.228181ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.325567188Z level=info msg="Executing migration" id="create login_attempt v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.3262129Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=646.382µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.329662443Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.330353196Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=690.113µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.333118027Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.333584375Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=464.698µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.366974861Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.368047081Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=1.07518ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.372563924Z level=info msg="Executing migration" id="create user auth table"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.373782227Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.217313ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.376895874Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.378377571Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.480847ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.38156102Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.381630961Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=70.591µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.385714207Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.390928493Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=5.213465ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.393748245Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.398761077Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.012232ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.401529378Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.406558181Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.028293ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.411741546Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.4167971Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.055324ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.419885367Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.420837184Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=951.147µs
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.423625056Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.428786561Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.158975ms
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.432679713Z level=info msg="Executing migration" id="create server_lock table"
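Each "Changed partition ... NonExistentPartition to NewPartition" line above is the Kafka controller reacting to topic creation: the 50 __consumer_offsets partitions and the single policy-pdp-pap partition are assigned to broker 1 ("assigned replicas 1") before being brought online. A minimal AdminClient sketch of the request that drives this state machine, for illustration; the partition count and replication factor are inferred from the log, not taken from the CSIT scripts themselves.

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    // Sketch only: creating a topic is what triggers the controller's
    // NonExistentPartition -> NewPartition -> OnlinePartition transitions above.
    public class TopicCreateSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Broker address as logged by policy-pap ("kafka (172.17.0.9:9092) open").
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                // One partition, replication factor 1, matching "assigned replicas 1".
                admin.createTopics(List.of(new NewTopic("policy-pdp-pap", 1, (short) 1)))
                     .all().get();
            }
        }
    }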
id="create server_lock table" duration=778.405µs 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.436468352Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 14:14:21 kafka | [2024-04-09 14:12:24,838] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:14:21 prometheus | ts=2024-04-09T14:11:49.767Z caller=main.go:573 level=info msg="No time or size retention was set so using the default time retention" duration=15d 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.437348458Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=879.746µs 14:14:21 policy-pap | Waiting for mariadb port 3306... 14:14:21 prometheus | ts=2024-04-09T14:11:49.767Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.1, branch=HEAD, revision=855b5ac4b80956874eb1790a04c92327f2f99e38)" 14:14:21 policy-db-migrator | > upgrade 0820-toscatrigger.sql 14:14:21 kafka | [2024-04-09 14:12:24,838] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.440448856Z level=info msg="Executing migration" id="create user auth token table" 14:14:21 policy-pap | mariadb (172.17.0.3:3306) open 14:14:21 prometheus | ts=2024-04-09T14:11:49.767Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@d3785d7783f2, date=20240328-09:27:30, tags=netgo,builtinassets,stringlabels)" 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:24,838] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.441284521Z level=info msg="Migration successfully executed" id="create user auth token table" duration=833.765µs 14:14:21 policy-pap | Waiting for kafka port 9092... 
14:14:21 prometheus | ts=2024-04-09T14:11:49.767Z caller=main.go:623 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" 14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 14:14:21 kafka | [2024-04-09 14:12:24,838] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.446327954Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 14:14:21 policy-pap | kafka (172.17.0.9:9092) open 14:14:21 prometheus | ts=2024-04-09T14:11:49.767Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:24,838] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.447222291Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=894.147µs 14:14:21 policy-pap | Waiting for api port 6969... 14:14:21 prometheus | ts=2024-04-09T14:11:49.767Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:24,838] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.450137634Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 14:14:21 policy-pap | api (172.17.0.7:6969) open 14:14:21 prometheus | ts=2024-04-09T14:11:49.769Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 14:14:21 kafka | [2024-04-09 14:12:24,838] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 14:14:21 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 14:14:21 prometheus | ts=2024-04-09T14:11:49.771Z caller=main.go:1129 level=info msg="Starting TSDB ..." 
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.451057631Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=918.487µs 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 14:14:21 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 14:14:21 prometheus | ts=2024-04-09T14:11:49.773Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.454042836Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 14:14:21 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-pap | 14:14:21 prometheus | ts=2024-04-09T14:11:49.773Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.455174567Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.129651ms 14:14:21 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-pap | . ____ _ __ _ _ 14:14:21 prometheus | ts=2024-04-09T14:11:49.778Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.459093729Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | 14:14:21 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 14:14:21 prometheus | ts=2024-04-09T14:11:49.778Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.081µs 14:14:21 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.468207157Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=9.113548ms 14:14:21 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 14:14:21 prometheus | ts=2024-04-09T14:11:49.778Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 14:14:21 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.471534739Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 14:14:21 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 14:14:21 prometheus | ts=2024-04-09T14:11:49.779Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.472490326Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=955.177µs 14:14:21 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 14:14:21 prometheus | ts=2024-04-09T14:11:49.779Z caller=head.go:815 level=info component=tsdb msg="WAL replay 
completed" checkpoint_replay_duration=193.564µs wal_replay_duration=420.558µs wbl_replay_duration=210ns total_replay_duration=665.933µs 14:14:21 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.475506112Z level=info msg="Executing migration" id="create cache_data table" 14:14:21 policy-pap | =========|_|==============|___/=/_/_/_/ 14:14:21 prometheus | ts=2024-04-09T14:11:49.783Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC 14:14:21 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.476380058Z level=info msg="Migration successfully executed" id="create cache_data table" duration=873.806µs 14:14:21 policy-pap | :: Spring Boot :: (v3.1.8) 14:14:21 prometheus | ts=2024-04-09T14:11:49.783Z caller=main.go:1153 level=info msg="TSDB started" 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.480126527Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 14:14:21 policy-pap | 14:14:21 prometheus | ts=2024-04-09T14:11:49.783Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 14:14:21 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.481021214Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=896.207µs 14:14:21 policy-pap | [2024-04-09T14:12:14.862+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.10 with PID 32 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 14:14:21 prometheus | ts=2024-04-09T14:11:49.785Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.263803ms db_storage=1.67µs remote_storage=1.71µs web_handler=760ns query_engine=960ns scrape=327.796µs scrape_sd=152.603µs notify=123.292µs notify_sd=11.15µs rules=2.2µs tracing=5.19µs 14:14:21 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.484131811Z level=info msg="Executing migration" id="create short_url table v1" 14:14:21 policy-pap | [2024-04-09T14:12:14.863+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" 14:14:21 prometheus | ts=2024-04-09T14:11:49.785Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 14:14:21 policy-db-migrator | 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.485010177Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=878.286µs 14:14:21 policy-pap | [2024-04-09T14:12:16.710+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 14:14:21 prometheus | ts=2024-04-09T14:11:49.785Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 
14:14:21 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql
14:14:21 policy-db-migrator | --------------
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.487994622Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
14:14:21 policy-pap | [2024-04-09T14:12:16.830+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 108 ms. Found 7 JPA repository interfaces.
14:14:21 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion)
14:14:21 kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.48894979Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=954.928µs
14:14:21 policy-pap | [2024-04-09T14:12:17.212+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
14:14:21 kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.49327666Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
14:14:21 policy-pap | [2024-04-09T14:12:17.212+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler
14:14:21 kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.493343311Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=67.382µs
14:14:21 policy-pap | [2024-04-09T14:12:17.990+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http)
14:14:21 kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.496222354Z level=info msg="Executing migration" id="delete alert_definition table"
14:14:21 policy-pap | [2024-04-09T14:12:18.001+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"]
14:14:21 kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.496304905Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=82.911µs
14:14:21 policy-pap | [2024-04-09T14:12:18.003+00:00|INFO|StandardService|main] Starting service [Tomcat]
14:14:21 kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.498874783Z level=info msg="Executing migration" id="recreate alert_definition table"
14:14:21 policy-pap | [2024-04-09T14:12:18.003+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18]
14:14:21 kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.500187167Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.311694ms
14:14:21 policy-pap | [2024-04-09T14:12:18.112+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext
14:14:21 kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.504911714Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
14:14:21 policy-pap | [2024-04-09T14:12:18.112+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3155 ms
14:14:21 kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.505886632Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=973.168µs
14:14:21 policy-pap | [2024-04-09T14:12:18.552+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default]
14:14:21 kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.508754275Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
14:14:21 policy-pap | [2024-04-09T14:12:18.641+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1
14:14:21 kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.510078089Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.322534ms
14:14:21 policy-pap | [2024-04-09T14:12:18.644+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer
14:14:21 kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.513740517Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
14:14:21 policy-pap | [2024-04-09T14:12:18.684+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled
14:14:21 kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.513840489Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=100.902µs
14:14:21 policy-pap | [2024-04-09T14:12:19.037+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer
14:14:21 kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.517391264Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
14:14:21 policy-pap | [2024-04-09T14:12:19.056+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting...
14:14:21 kafka | [2024-04-09 14:12:24,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.518613607Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.223213ms
14:14:21 policy-pap | [2024-04-09T14:12:19.176+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@2def7a7a
14:14:21 kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.522472088Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
14:14:21 policy-pap | [2024-04-09T14:12:19.178+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed.
14:14:21 kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.523374004Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=901.296µs
14:14:21 policy-pap | [2024-04-09T14:12:21.136+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration)
14:14:21 kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.526541763Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
14:14:21 policy-pap | [2024-04-09T14:12:21.140+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default'
14:14:21 kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.527552562Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.010579ms
14:14:21 policy-pap | [2024-04-09T14:12:21.664+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository
14:14:21 kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.532155026Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
14:14:21 policy-pap | [2024-04-09T14:12:22.093+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository
14:14:21 kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.533175205Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.020099ms
14:14:21 policy-pap | [2024-04-09T14:12:22.203+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository
14:14:21 kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.536865483Z level=info msg="Executing migration" id="Add column paused in alert_definition"
14:14:21 policy-pap | [2024-04-09T14:12:22.477+00:00|INFO|ConsumerConfig|main] ConsumerConfig values:
14:14:21 kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.543803611Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=6.935418ms
14:14:21 policy-pap | allow.auto.create.topics = true
14:14:21 kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.54698783Z level=info msg="Executing migration" id="drop alert_definition table"
14:14:21 policy-pap | auto.commit.interval.ms = 5000
14:14:21 kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.547714613Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=725.713µs
14:14:21 policy-pap | auto.include.jmx.reporter = true
14:14:21 kafka | [2024-04-09 14:12:24,844] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,844] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.552448101Z level=info msg="Executing migration" id="delete alert_definition_version table"
14:14:21 policy-pap | auto.offset.reset = latest
14:14:21 kafka | [2024-04-09 14:12:24,984] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,984] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.552514172Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=66.412µs
14:14:21 policy-pap | bootstrap.servers = [kafka:9092]
14:14:21 kafka | [2024-04-09 14:12:24,984] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,984] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.57683661Z level=info msg="Executing migration" id="recreate alert_definition_version table"
14:14:21 policy-pap | check.crcs = true
14:14:21 kafka | [2024-04-09 14:12:24,984] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,984] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.577647875Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=814.495µs
14:14:21 policy-pap | client.dns.lookup = use_all_dns_ips
14:14:21 kafka | [2024-04-09 14:12:24,984] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,984] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.580409866Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
14:14:21 policy-pap | client.id = consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-1
14:14:21 kafka | [2024-04-09 14:12:24,984] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,984] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.581137789Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=727.713µs
14:14:21 policy-pap | client.rack = 
14:14:21 kafka | [2024-04-09 14:12:24,984] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,984] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.584727056Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
14:14:21 policy-pap | connections.max.idle.ms = 540000
14:14:21 kafka | [2024-04-09 14:12:24,984] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.586283314Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.555498ms
14:14:21 policy-db-migrator | --------------
14:14:21 policy-pap | default.api.timeout.ms = 60000
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.591793776Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
14:14:21 kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:14:21 policy-pap | enable.auto.commit = true
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.591876017Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=83.811µs
14:14:21 kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:14:21 policy-pap | exclude.internal.topics = true
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.594882743Z level=info msg="Executing migration" id="drop alert_definition_version table"
14:14:21 kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:14:21 policy-pap | fetch.max.bytes = 52428800
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.595848111Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=964.348µs
14:14:21 kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:14:21 kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:14:21 policy-pap | fetch.max.wait.ms = 500
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.598744234Z level=info msg="Executing migration" id="create alert_instance table"
14:14:21 kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:14:21 policy-db-migrator | 
14:14:21 policy-pap | fetch.min.bytes = 1
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.599822714Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.07709ms
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql
14:14:21 policy-pap | group.id = 8886bf5a-38da-4c7c-af7d-ca09814a22ad
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.60450560Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
14:14:21 policy-pap | group.instance.id = null
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.605720063Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.214793ms
14:14:21 policy-pap | heartbeat.interval.ms = 3000
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.608748939Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
14:14:21 policy-pap | interceptor.classes = []
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.610235696Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.484777ms
14:14:21 policy-pap | internal.leave.group.on.close = true
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.613394354Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
14:14:21 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.62186660Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=8.471216ms
14:14:21 policy-pap | isolation.level = read_uncommitted
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.627997723Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
14:14:21 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.629108364Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.112701ms
14:14:21 policy-pap | max.partition.fetch.bytes = 1048576
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql
14:14:21 policy-pap | max.poll.interval.ms = 300000
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.63326090Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
14:14:21 policy-pap | max.poll.records = 500
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.635336998Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=2.075458ms
14:14:21 policy-pap | metadata.max.age.ms = 300000
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.644514008Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
14:14:21 policy-pap | metric.reporters = []
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.673306009Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=28.78951ms
14:14:21 policy-pap | metrics.num.samples = 2
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.678382222Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
14:14:21 policy-pap | metrics.recording.level = INFO
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.715474286Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=37.073774ms
14:14:21 policy-pap | metrics.sample.window.ms = 30000
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.736536484Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
14:14:21 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.73793910Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.403496ms
14:14:21 policy-pap | receive.buffer.bytes = 65536
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.743747978Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
14:14:21 policy-pap | reconnect.backoff.max.ms = 1000
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName)
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.744781696Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.033799ms
14:14:21 policy-pap | reconnect.backoff.ms = 50
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.750364249Z level=info msg="Executing migration" id="add current_reason column related to current_state"
14:14:21 policy-pap | request.timeout.ms = 30000
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.756432251Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=6.067932ms
14:14:21 policy-pap | retry.backoff.ms = 100
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.801790908Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
14:14:21 policy-pap | sasl.client.callback.handler.class = null
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.809364177Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=7.576849ms
14:14:21 policy-pap | sasl.jaas.config = null
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.814701665Z level=info msg="Executing migration" id="create alert_rule table"
14:14:21 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.815710884Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.008639ms
14:14:21 policy-pap | sasl.kerberos.min.time.before.relogin = 60000
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.818464625Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql
14:14:21 policy-pap | sasl.kerberos.service.name = null
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.819338651Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=874.286µs
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
14:14:21 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.823269774Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.824032438Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=763.234µs
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql
14:14:21 policy-pap | sasl.login.callback.handler.class = null
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.827460651Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT
14:14:21 policy-pap | sasl.login.class = null
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.828729024Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.268273ms
14:14:21 policy-db-migrator | --------------
14:14:21 policy-db-migrator | 
14:14:21 policy-pap | sasl.login.connect.timeout.ms = null
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.841899117Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
14:14:21 policy-db-migrator | 
14:14:21 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql
14:14:21 policy-pap | sasl.login.read.timeout.ms = null
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.842004479Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=108.242µs
14:14:21 kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
14:14:21 policy-db-migrator | --------------
14:14:21 policy-pap | sasl.login.refresh.buffer.seconds = 300
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.857390233Z level=info msg="Executing migration" id="add column for to alert_rule"
14:14:21 kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0,
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:14:21 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 14:14:21 policy-pap | sasl.login.refresh.min.period.seconds = 60 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.866208735Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=8.819602ms 14:14:21 kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-pap | sasl.login.refresh.window.factor = 0.8 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.870032516Z level=info msg="Executing migration" id="add column annotations to alert_rule" 14:14:21 kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:14:21 policy-db-migrator | 14:14:21 policy-pap | sasl.login.refresh.window.jitter = 0.05 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.876737339Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=6.702643ms 14:14:21 kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:14:21 policy-db-migrator | 14:14:21 policy-pap | sasl.login.retry.backoff.max.ms = 10000 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.882488085Z level=info msg="Executing migration" id="add column labels to alert_rule" 14:14:21 kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:14:21 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 14:14:21 policy-pap | sasl.login.retry.backoff.ms = 100 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.889917902Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=7.429107ms 14:14:21 kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.893813544Z level=info msg="Executing migration" id="remove unique index from 
alert_rule on org_id, title columns" 14:14:21 kafka | [2024-04-09 14:12:24,985] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:14:21 policy-pap | sasl.mechanism = GSSAPI 14:14:21 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.894632199Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=818.635µs 14:14:21 kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.932316473Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 14:14:21 kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.expected.audience = null 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:53.93376284Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.453317ms 14:14:21 kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.expected.issuer = null 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.012513111Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 14:14:21 kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 14:14:21 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 14:14:21 kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:14:21 
policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.02231238Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=9.797759ms 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.027098437Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 14:14:21 kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:14:21 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 14:14:21 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.034728656Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=7.627669ms 14:14:21 kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-pap | sasl.oauthbearer.scope.claim.name = scope 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.043144079Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 14:14:21 kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:14:21 policy-db-migrator | 14:14:21 policy-pap | sasl.oauthbearer.sub.claim.name = sub 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.044173808Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.030219ms 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.token.endpoint.url = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.050435742Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 14:14:21 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 14:14:21 kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:14:21 policy-pap | security.protocol = PLAINTEXT 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.058217504Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=7.780912ms 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:14:21 policy-pap | security.providers = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.060900233Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 14:14:21 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 14:14:21 kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:14:21 policy-pap | send.buffer.bytes = 131072 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.065760101Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.860378ms 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:14:21 policy-pap | session.timeout.ms = 45000 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.068392409Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:14:21 policy-pap | socket.connection.setup.timeout.max.ms = 30000 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:14:21 policy-pap | socket.connection.setup.timeout.ms = 10000 14:14:21 kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:14:21 grafana | logger=migrator 
t=2024-04-09T14:11:54.06844137Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=49.381µs 14:14:21 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 14:14:21 policy-pap | ssl.cipher.suites = null 14:14:21 kafka | [2024-04-09 14:12:24,986] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.073302449Z level=info msg="Executing migration" id="create alert_rule_version table" 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 14:14:21 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 14:14:21 kafka | [2024-04-09 14:12:24,987] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.074133134Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=830.325µs 14:14:21 policy-pap | ssl.endpoint.identification.algorithm = https 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:24,987] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.077557447Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 14:14:21 policy-pap | ssl.engine.factory.class = null 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:24,987] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 14:14:21 policy-pap | ssl.key.password = null 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.078617376Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.059859ms 14:14:21 kafka | [2024-04-09 14:12:24,993] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 14:14:21 policy-pap | ssl.keymanager.algorithm = SunX509 14:14:21 policy-db-migrator | > upgrade 
1050-FK_ToscaTopologyTemplate_policyName.sql 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.083448684Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 14:14:21 kafka | [2024-04-09 14:12:24,993] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 14:14:21 policy-pap | ssl.keystore.certificate.chain = null 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.084952252Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.503237ms 14:14:21 kafka | [2024-04-09 14:12:24,993] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 14:14:21 policy-pap | ssl.keystore.key = null 14:14:21 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.094396093Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 14:14:21 kafka | [2024-04-09 14:12:24,993] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 14:14:21 policy-pap | ssl.keystore.location = null 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.094532186Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=137.603µs 14:14:21 kafka | [2024-04-09 14:12:24,993] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 14:14:21 policy-pap | ssl.keystore.password = null 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.09804834Z level=info msg="Executing migration" id="add column for to alert_rule_version" 14:14:21 kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 14:14:21 policy-pap | ssl.keystore.type = JKS 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.104961876Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.913516ms 14:14:21 kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) 14:14:21 policy-pap | ssl.protocol = TLSv1.3 14:14:21 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 14:14:21 kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.111626618Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 14:14:21 policy-pap | ssl.provider = null 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.116739271Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=5.113434ms 14:14:21 policy-pap | ssl.secure.random.implementation = null 14:14:21 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.119930589Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 14:14:21 kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 14:14:21 policy-pap | ssl.trustmanager.algorithm = PKIX 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.12605463Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.123061ms 14:14:21 kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 14:14:21 policy-pap | ssl.truststore.certificates = null 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.131160033Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 14:14:21 kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) 14:14:21 policy-pap | ssl.truststore.location = null 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.140652706Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=9.491103ms 14:14:21 kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) 14:14:21 policy-pap | ssl.truststore.password = null 14:14:21 policy-db-migrator | > upgrade 0100-pdp.sql 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.145126258Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 14:14:21 kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 14:14:21 policy-pap | ssl.truststore.type = JKS 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.152404441Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=7.278673ms 14:14:21 kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], 
isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 14:14:21 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 14:14:21 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.156454655Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 14:14:21 kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 14:14:21 policy-pap | 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.156528916Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=78.711µs 14:14:21 kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) 14:14:21 policy-pap | [2024-04-09T14:12:22.634+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.161632489Z level=info msg="Executing migration" id=create_alert_configuration_table 14:14:21 kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 14:14:21 policy-pap | [2024-04-09T14:12:22.635+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.162449674Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=817.565µs 14:14:21 kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) 14:14:21 policy-pap | [2024-04-09T14:12:22.635+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1712671942633 14:14:21 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.205446397Z level=info msg="Executing migration" id="Add column default in alert_configuration" 14:14:21 kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 14:14:21 policy-pap | [2024-04-09T14:12:22.637+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-1, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Subscribed to topic(s): policy-pdp-pap 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.214625955Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=9.184798ms 14:14:21 kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 14:14:21 policy-pap | [2024-04-09T14:12:22.638+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 14:14:21 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.218526146Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 14:14:21 kafka | [2024-04-09 14:12:24,994] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 14:14:21 policy-pap | allow.auto.create.topics = true 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.219006785Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=481.099µs 14:14:21 kafka | [2024-04-09 14:12:24,995] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) 14:14:21 policy-pap | auto.commit.interval.ms = 5000 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.222978607Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 14:14:21 kafka | [2024-04-09 14:12:24,995] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) 14:14:21 policy-pap | auto.include.jmx.reporter = true 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.229161099Z level=info 
msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.182302ms 14:14:21 kafka | [2024-04-09 14:12:24,995] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) 14:14:21 policy-pap | auto.offset.reset = latest 14:14:21 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.238600241Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 14:14:21 kafka | [2024-04-09 14:12:24,995] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) 14:14:21 policy-pap | bootstrap.servers = [kafka:9092] 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.239927816Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.324925ms 14:14:21 kafka | [2024-04-09 14:12:24,995] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) 14:14:21 policy-pap | check.crcs = true 14:14:21 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.244376077Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 14:14:21 kafka | [2024-04-09 14:12:24,995] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) 14:14:21 policy-pap | client.dns.lookup = use_all_dns_ips 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.252716719Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=8.340292ms 14:14:21 policy-pap | client.id = consumer-policy-pap-2 14:14:21 kafka | [2024-04-09 14:12:24,995] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.257072098Z level=info 
msg="Executing migration" id=create_ngalert_configuration_table 14:14:21 policy-pap | client.rack = 14:14:21 kafka | [2024-04-09 14:12:24,995] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.25772058Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=648.712µs 14:14:21 policy-pap | connections.max.idle.ms = 540000 14:14:21 kafka | [2024-04-09 14:12:24,996] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 14:14:21 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.261710513Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 14:14:21 policy-pap | default.api.timeout.ms = 60000 14:14:21 kafka | [2024-04-09 14:12:24,996] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.262539218Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=828.705µs 14:14:21 policy-pap | enable.auto.commit = true 14:14:21 kafka | [2024-04-09 14:12:24,997] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) 14:14:21 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.265442611Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 14:14:21 policy-pap | exclude.internal.topics = true 14:14:21 kafka | [2024-04-09 14:12:24,997] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 
14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.27199055Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.547699ms 14:14:21 policy-pap | fetch.max.bytes = 52428800 14:14:21 kafka | [2024-04-09 14:12:24,997] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.277793196Z level=info msg="Executing migration" id="create provenance_type table" 14:14:21 policy-pap | fetch.max.wait.ms = 500 14:14:21 kafka | [2024-04-09 14:12:24,997] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.278628541Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=835.155µs 14:14:21 policy-pap | fetch.min.bytes = 1 14:14:21 kafka | [2024-04-09 14:12:24,997] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) 14:14:21 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.284276174Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 14:14:21 policy-pap | group.id = policy-pap 14:14:21 kafka | [2024-04-09 14:12:24,997] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.286004316Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.727272ms 14:14:21 policy-pap | group.instance.id = null 14:14:21 kafka | [2024-04-09 14:12:24,997] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 14:14:21 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num
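Note: the 0120-0140 pdpstatistics scripts logged around this point follow a common re-keying pattern: drop the old primary key (ALTER TABLE pdpstatistics DROP PRIMARY KEY), add a NOT NULL ID column alongside the new counters (0130), backfill ID for existing rows with a ROW_NUMBER() window function (the UPDATE ... JOIN just above), and only then install the composite key PK_PDPSTATISTICS (ID, name, version) (the ALTER TABLE that follows below). Here is a minimal JDBC sketch replaying that sequence against a MySQL-family database, with the statements copied from the log; the JDBC URL, user, and password are placeholders, not the values the real policy-db-migrator uses.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class PdpStatisticsPkMigrationSketch {     // hypothetical name, for illustration
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; assumes a MariaDB/MySQL JDBC driver on the classpath.
        String url = "jdbc:mariadb://localhost:3306/policyadmin";
        try (Connection conn = DriverManager.getConnection(url, "policy_user", "policy_pass");
             Statement st = conn.createStatement()) {
            // 0120-pk_pdpstatistics.sql: remove the old primary key first.
            st.executeUpdate("ALTER TABLE pdpstatistics DROP PRIMARY KEY");
            // 0130-pdpstatistics.sql: add the new counter columns and the ID column.
            st.executeUpdate("ALTER TABLE pdpstatistics"
                + " ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT,"
                + " ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL,"
                + " ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL,"
                + " ADD COLUMN ID BIGINT NOT NULL");
            // 0140-pk_pdpstatistics.sql, step 1: number existing rows so ID is unique.
            st.executeUpdate("UPDATE pdpstatistics as p"
                + " JOIN (SELECT name, version, timeStamp,"
                + " ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num"
                + " FROM pdpstatistics GROUP BY name, version, timeStamp) AS t"
                + " ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp)"
                + " SET p.id=t.row_num");
            // 0140-pk_pdpstatistics.sql, step 2: install the new composite primary key.
            st.executeUpdate("ALTER TABLE pdpstatistics"
                + " ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version)");
        }
    }
}
```

The ordering is what matters: the backfill can only run once the ID column exists, and the PRIMARY KEY constraint can only be added after the backfill has made (ID, name, version) unique.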
14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.290663101Z level=info msg="Executing migration" id="create alert_image table" 14:14:21 policy-pap | heartbeat.interval.ms = 3000 14:14:21 kafka | [2024-04-09 14:12:24,997] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.291664169Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.000318ms 14:14:21 policy-pap | interceptor.classes = [] 14:14:21 kafka | [2024-04-09 14:12:24,997] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.295130972Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 14:14:21 policy-pap | internal.leave.group.on.close = true 14:14:21 kafka | [2024-04-09 14:12:24,997] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.296248222Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.11726ms 14:14:21 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 14:14:21 kafka | [2024-04-09 14:12:24,997] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 14:14:21 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.30051651Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 14:14:21 policy-pap | isolation.level = read_uncommitted 14:14:21 kafka | [2024-04-09 14:12:24,997] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[],
removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.300608911Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=93.141µs 14:14:21 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 14:14:21 kafka | [2024-04-09 14:12:24,997] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.305599453Z level=info msg="Executing migration" id=create_alert_configuration_history_table 14:14:21 policy-pap | max.partition.fetch.bytes = 1048576 14:14:21 kafka | [2024-04-09 14:12:24,997] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.306660702Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.06116ms 14:14:21 policy-pap | max.poll.interval.ms = 300000 14:14:21 kafka | [2024-04-09 14:12:24,997] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) 14:14:21 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.311331897Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 14:14:21 policy-pap | max.poll.records = 500 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.312885605Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.553858ms 14:14:21 kafka | [2024-04-09 14:12:24,998] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) 14:14:21 policy-pap | metadata.max.age.ms = 300000 14:14:21 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.317675993Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 14:14:21 kafka | [2024-04-09 14:12:24,998] TRACE [Controller id=1 epoch=1] 
Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 14:14:21 policy-pap | metric.reporters = [] 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.318450607Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 14:14:21 kafka | [2024-04-09 14:12:24,998] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 14:14:21 policy-pap | metrics.num.samples = 2 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.322299087Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 14:14:21 kafka | [2024-04-09 14:12:25,000] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) 14:14:21 policy-pap | metrics.recording.level = INFO 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.322757745Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=459.458µs 14:14:21 kafka | [2024-04-09 14:12:25,006] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) 14:14:21 policy-pap | metrics.sample.window.ms = 30000 14:14:21 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 14:14:21 kafka | [2024-04-09 14:12:25,009] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.326485573Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | receive.buffer.bytes = 65536 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.327597603Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.11124ms 14:14:21 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME 14:14:21 kafka | [2024-04-09 14:12:25,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | reconnect.backoff.max.ms = 1000 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.332022964Z level=info 
msg="Executing migration" id="add last_applied column to alert_configuration_history" 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | reconnect.backoff.ms = 50 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.343356131Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=11.333637ms 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:25,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.348443693Z level=info msg="Executing migration" id="create library_element table v1" 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:25,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | request.timeout.ms = 30000 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.349213797Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.289473ms 14:14:21 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 14:14:21 kafka | [2024-04-09 14:12:25,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | retry.backoff.ms = 100 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.35266416Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | sasl.client.callback.handler.class = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.35374203Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.07778ms 14:14:21 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a 14:14:21 kafka | [2024-04-09 14:12:25,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | sasl.jaas.config = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.358954005Z level=info msg="Executing migration" id="create library_element_connection table v1" 14:14:21 policy-db-migrator | JOIN pdpstatistics b 14:14:21 kafka | [2024-04-09 14:12:25,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.35978673Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=830.605µs 14:14:21 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp 14:14:21 kafka | [2024-04-09 14:12:25,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from 
NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.362750804Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 14:14:21 policy-db-migrator | SET a.id = b.id 14:14:21 kafka | [2024-04-09 14:12:25,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | sasl.kerberos.service.name = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.363818523Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.067349ms 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.36691402Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:25,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.367965869Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.051259ms 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:25,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | sasl.login.callback.handler.class = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.371921291Z level=info msg="Executing migration" id="increase max description length to 2048" 14:14:21 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 14:14:21 kafka | [2024-04-09 14:12:25,011] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | sasl.login.class = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.372005983Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=83.362µs 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | sasl.login.connect.timeout.ms = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.375287233Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 14:14:21 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | sasl.login.read.timeout.ms = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.375376894Z level=info msg="Migration 
successfully executed" id="alter library_element model to mediumtext" duration=89.251µs 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | sasl.login.refresh.buffer.seconds = 300 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.382130097Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | sasl.login.refresh.min.period.seconds = 60 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.382466173Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=360.756µs 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | sasl.login.refresh.window.factor = 0.8 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.389453901Z level=info msg="Executing migration" id="create data_keys table" 14:14:21 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | sasl.login.refresh.window.jitter = 0.05 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.391279744Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.829243ms 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | sasl.login.retry.backoff.max.ms = 10000 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.395781606Z level=info msg="Executing migration" id="create secrets table" 14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | sasl.login.retry.backoff.ms = 100 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.396886686Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.10432ms 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | sasl.mechanism = GSSAPI 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.400084794Z level=info msg="Executing migration" id="rename data_keys name column to id" 14:14:21 
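The policy-db-migrator stream in this stretch interleaves the 0160-0180 jpapdpstatistics_enginestats upgrade steps with the kafka and grafana output, so the 0170 statement is split across several lines above. Collected from those fragments (a reconstruction from the log lines only, with semicolons added for readability; the migrator's source scripts are not shown here), the sequence is: 0160 adds an ID column, 0170 backfills it by joining on pdpstatistics, and 0180 drops the now-redundant timeStamp column:

ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME;

UPDATE jpapdpstatistics_enginestats a
JOIN pdpstatistics b
  ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp
SET a.id = b.id;

ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp;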
policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.43333202Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=33.247456ms 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.expected.audience = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.43662772Z level=info msg="Executing migration" id="add name column into data_keys" 14:14:21 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.expected.issuer = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.442452547Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.822747ms 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.447402677Z level=info msg="Executing migration" id="copy data_keys id column values into name" 14:14:21 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.44754581Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=143.422µs 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.450709077Z level=info msg="Executing migration" id="rename data_keys name column to label" 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.484048125Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=33.340798ms 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica 
(state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.scope.claim.name = scope 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.486954388Z level=info msg="Executing migration" id="rename data_keys id column back to name" 14:14:21 policy-db-migrator | > upgrade 0210-sequence.sql 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.sub.claim.name = sub 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.515815904Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=28.860496ms 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.token.endpoint.url = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.519173715Z level=info msg="Executing migration" id="create kv_store table v1" 14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-pap | security.protocol = PLAINTEXT 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.519842807Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=669.172µs 14:14:21 policy-pap | security.providers = null 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.52382377Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 14:14:21 policy-pap | send.buffer.bytes = 131072 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.524864879Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.040488ms 14:14:21 policy-pap | session.timeout.ms = 45000 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-db-migrator | > upgrade 0220-sequence.sql 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.529033984Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 14:14:21 policy-pap | socket.connection.setup.timeout.max.ms = 30000 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.529239508Z level=info msg="Migration successfully 
executed" id="update dashboard_uid and panel_id from existing annotations" duration=205.854µs 14:14:21 policy-pap | socket.connection.setup.timeout.ms = 10000 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.532219552Z level=info msg="Executing migration" id="create permission table" 14:14:21 policy-pap | ssl.cipher.suites = null 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.533701519Z level=info msg="Migration successfully executed" id="create permission table" duration=1.481567ms 14:14:21 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.538864934Z level=info msg="Executing migration" id="add unique index permission.role_id" 14:14:21 policy-pap | ssl.endpoint.identification.algorithm = https 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.540383371Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.518247ms 14:14:21 policy-pap | ssl.engine.factory.class = null 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.543986387Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 14:14:21 policy-pap | ssl.key.password = null 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.544995215Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.008318ms 14:14:21 policy-pap | ssl.keymanager.algorithm = SunX509 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.548051941Z level=info msg="Executing migration" id="create role table" 14:14:21 
policy-pap | ssl.keystore.certificate.chain = null 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.54909552Z level=info msg="Migration successfully executed" id="create role table" duration=1.040479ms 14:14:21 policy-pap | ssl.keystore.key = null 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.553029252Z level=info msg="Executing migration" id="add column display_name" 14:14:21 policy-pap | ssl.keystore.location = null 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.560827614Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.797772ms 14:14:21 policy-pap | ssl.keystore.password = null 14:14:21 kafka | [2024-04-09 14:12:25,012] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) 14:14:21 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.564077053Z level=info msg="Executing migration" id="add column group_name" 14:14:21 policy-pap | ssl.keystore.type = JKS 14:14:21 kafka | [2024-04-09 14:12:25,012] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.571251934Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.173861ms 14:14:21 policy-pap | ssl.protocol = TLSv1.3 14:14:21 kafka | [2024-04-09 14:12:25,017] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) 14:14:21 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.57705257Z level=info msg="Executing migration" id="add index role.org_id" 14:14:21 policy-pap | ssl.provider = null 14:14:21 kafka | [2024-04-09 14:12:25,018] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.578007797Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=954.767µs 14:14:21 policy-pap | ssl.secure.random.implementation = null 14:14:21 kafka | [2024-04-09 14:12:25,018] TRACE [Broker id=1] 
Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.621886336Z level=info msg="Executing migration" id="add unique index role_org_id_name" 14:14:21 policy-pap | ssl.trustmanager.algorithm = PKIX 14:14:21 kafka | [2024-04-09 14:12:25,018] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.624231539Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=2.345553ms 14:14:21 policy-pap | ssl.truststore.certificates = null 14:14:21 kafka | [2024-04-09 14:12:25,018] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-db-migrator | > upgrade 0120-toscatrigger.sql 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.629115748Z level=info msg="Executing migration" id="add index role_org_id_uid" 14:14:21 policy-pap | ssl.truststore.location = null 14:14:21 kafka | [2024-04-09 14:12:25,018] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.630459563Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.342765ms 14:14:21 policy-pap | ssl.truststore.password = null 14:14:21 kafka | [2024-04-09 14:12:25,018] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.633802274Z level=info msg="Executing migration" id="create team role table" 14:14:21 policy-pap | ssl.truststore.type = JKS 14:14:21 kafka | [2024-04-09 14:12:25,018] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 
(state.change.logger) 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.63471886Z level=info msg="Migration successfully executed" id="create team role table" duration=917.186µs 14:14:21 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 14:14:21 kafka | [2024-04-09 14:12:25,018] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.641126647Z level=info msg="Executing migration" id="add index team_role.org_id" 14:14:21 policy-pap | 14:14:21 kafka | [2024-04-09 14:12:25,018] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.642272458Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.145751ms 14:14:21 policy-pap | [2024-04-09T14:12:22.643+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 14:14:21 kafka | [2024-04-09 14:12:25,018] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.650515759Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 14:14:21 policy-pap | [2024-04-09T14:12:22.644+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 14:14:21 kafka | [2024-04-09 14:12:25,018] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.652383193Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.863194ms 14:14:21 policy-pap | [2024-04-09T14:12:22.644+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1712671942643 14:14:21 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.655817205Z level=info msg="Executing migration" id="add index team_role.team_id" 14:14:21 kafka | [2024-04-09 14:12:25,018] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | [2024-04-09T14:12:22.644+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.656911815Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.09405ms 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.660626602Z level=info msg="Executing migration" id="create user role table" 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | [2024-04-09T14:12:22.985+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.661512439Z level=info msg="Migration successfully executed" id="create user role table" duration=885.267µs 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | [2024-04-09T14:12:23.119+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 14:14:21 policy-db-migrator | > upgrade 0140-toscaparameter.sql 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.666997139Z level=info msg="Executing migration" id="add index user_role.org_id" 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | [2024-04-09T14:12:23.353+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@53917c92, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@1fa796a4, org.springframework.security.web.context.SecurityContextHolderFilter@1f013047, org.springframework.security.web.header.HeaderWriterFilter@ce0bbd5, org.springframework.security.web.authentication.logout.LogoutFilter@44c2e8a8, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@4fbbd98c, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@51566ce0, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@17e6d07b, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@68de8522, org.springframework.security.web.access.ExceptionTranslationFilter@1f7557fe, org.springframework.security.web.access.intercept.AuthorizationFilter@3879feec] 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.668248321Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.253772ms 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | [2024-04-09T14:12:24.127+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 14:14:21 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.671491461Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | [2024-04-09T14:12:24.226+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.672583101Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.09104ms 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | [2024-04-09T14:12:24.246+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.677759395Z level=info msg="Executing migration" id="add index user_role.user_id" 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | [2024-04-09T14:12:24.268+00:00|INFO|ServiceManager|main] Policy PAP starting 14:14:21 policy-db-migrator | 14:14:21 policy-pap | [2024-04-09T14:12:24.268+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.679863583Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=2.104548ms 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-db-migrator | > upgrade 0150-toscaproperty.sql 14:14:21 policy-pap | [2024-04-09T14:12:24.269+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.685663639Z level=info msg="Executing migration" id="create builtin role table" 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-pap | [2024-04-09T14:12:24.270+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.687066025Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.402256ms 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints 14:14:21 policy-pap | [2024-04-09T14:12:24.270+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.695985027Z 
level=info msg="Executing migration" id="add index builtin_role.role_id" 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-pap | [2024-04-09T14:12:24.270+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.697468544Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.510938ms 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-db-migrator | 14:14:21 policy-pap | [2024-04-09T14:12:24.270+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.701536628Z level=info msg="Executing migration" id="add index builtin_role.name" 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-pap | [2024-04-09T14:12:24.274+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=8886bf5a-38da-4c7c-af7d-ca09814a22ad, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@3ff3275b 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.702899003Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.361605ms 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata 14:14:21 policy-pap | [2024-04-09T14:12:24.284+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource 
[consumerGroup=8886bf5a-38da-4c7c-af7d-ca09814a22ad, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.757776893Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | [2024-04-09T14:12:24.285+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.768630081Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=10.856348ms 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | allow.auto.create.topics = true 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.773777885Z level=info msg="Executing migration" id="add index builtin_role.org_id" 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | auto.commit.interval.ms = 5000 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.774876785Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.09879ms 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | auto.include.jmx.reporter = true 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.779944517Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 14:14:21 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | auto.offset.reset = latest 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.78117412Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.229053ms 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | bootstrap.servers = [kafka:9092] 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.784348987Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | check.crcs = true 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.785653471Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.303924ms 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | client.dns.lookup = use_all_dns_ips 14:14:21 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.78997545Z level=info msg="Executing migration" id="add unique index role.uid" 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | client.id = consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.79108953Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.11447ms 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | client.rack = 14:14:21 policy-db-migrator | ALTER TABLE 
jpapolicyaudit DROP PRIMARY KEY 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.794699886Z level=info msg="Executing migration" id="create seed assignment table" 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | connections.max.idle.ms = 540000 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.795552731Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=852.205µs 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | default.api.timeout.ms = 60000 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.798487095Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | enable.auto.commit = true 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.799638366Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.150171ms 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | exclude.internal.topics = true 14:14:21 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.803004037Z level=info msg="Executing migration" id="add column hidden to role table" 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | fetch.max.bytes = 52428800 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.811464042Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.457645ms
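The TOSCA cleanup and primary-key migrations in this stretch are likewise scattered across interleaved migrator lines. Collected per upgrade script (reassembled from the log fragments above, semicolons added for readability):

-- 0140-toscaparameter.sql
DROP TABLE IF EXISTS toscaparameter;

-- 0150-toscaproperty.sql
DROP TABLE IF EXISTS jpatoscaproperty_constraints;
DROP TABLE IF EXISTS jpatoscaproperty_metadata;
DROP TABLE IF EXISTS toscaproperty;

-- 0160-jpapolicyaudit_pk.sql: replaces the composite key (ID, name, version)
-- created by 0190-jpapolicyaudit.sql with a single-column key on ID
ALTER TABLE jpapolicyaudit DROP PRIMARY KEY;
ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID);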
id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | fetch.max.wait.ms = 500 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.815878332Z level=info msg="Executing migration" id="permission kind migration" 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | fetch.min.bytes = 1 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.824317176Z level=info msg="Migration successfully executed" id="permission kind migration" duration=8.440124ms 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | group.id = 8886bf5a-38da-4c7c-af7d-ca09814a22ad 14:14:21 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.828671505Z level=info msg="Executing migration" id="permission attribute migration" 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | group.instance.id = null 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.834247257Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=5.573212ms 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | heartbeat.interval.ms = 3000 14:14:21 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.838467584Z level=info msg="Executing migration" id="permission identifier migration" 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 
1 (state.change.logger) 14:14:21 policy-pap | interceptor.classes = [] 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.847774503Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=9.306569ms 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | internal.leave.group.on.close = true 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.852209214Z level=info msg="Executing migration" id="add permission identifier index" 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.853786623Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.577049ms 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | isolation.level = read_uncommitted 14:14:21 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.857950579Z level=info msg="Executing migration" id="add permission action scope role_id index" 14:14:21 kafka | [2024-04-09 14:12:25,019] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 14:14:21 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.859389235Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.437746ms 14:14:21 kafka | [2024-04-09 14:12:25,050] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 14:14:21 policy-pap | max.partition.fetch.bytes = 1048576 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.862746366Z level=info msg="Executing migration" id="remove permission role_id action scope index" 14:14:21 kafka | [2024-04-09 
14:12:25,050] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 14:14:21 policy-pap | max.poll.interval.ms = 300000 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.863792365Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.045909ms 14:14:21 kafka | [2024-04-09 14:12:25,050] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 14:14:21 policy-pap | max.poll.records = 500 14:14:21 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.867696326Z level=info msg="Executing migration" id="create query_history table v1" 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 14:14:21 policy-pap | metadata.max.age.ms = 300000 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.868939379Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.242493ms 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 14:14:21 policy-pap | metric.reporters = [] 14:14:21 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.875918956Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 14:14:21 policy-pap | metrics.num.samples = 2 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.877543836Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.62391ms 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.887442646Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 14:14:21 policy-pap | metrics.recording.level = INFO 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.887583169Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=144.283µs 14:14:21 policy-pap | metrics.sample.window.ms = 30000 14:14:21 
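The policy-db-migrator entries above show a recurring primary-key upgrade pattern (e.g. 0170-pdpstatistics_pk.sql): drop the table's existing primary key, then re-add it as a named constraint on ID. Below is a minimal JDBC sketch of that two-step pattern, assuming a local MariaDB instance; the URL, user, and password are placeholders for illustration, not the values this CSIT run uses, and the class is not part of the actual migrator.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class PrimaryKeyUpgradeSketch {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details; the real migrator reads its
            // target database from its own configuration.
            String url = "jdbc:mariadb://localhost:3306/policyadmin";
            try (Connection conn = DriverManager.getConnection(url, "policy_user", "policy_user");
                 Statement stmt = conn.createStatement()) {
                // Same two statements as logged for the jpapolicyaudit table:
                // drop the old key, then recreate it as PK_JPAPOLICYAUDIT on ID.
                stmt.executeUpdate("ALTER TABLE jpapolicyaudit DROP PRIMARY KEY");
                stmt.executeUpdate("ALTER TABLE jpapolicyaudit"
                        + " ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID)");
            }
        }
    }

Note that ALTER TABLE statements commit implicitly on MariaDB, so the drop and the re-add are not atomic; each step succeeds or fails on its own, which matches the migrator logging them as separate statements.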
kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 14:14:21 policy-db-migrator | > upgrade 0100-upgrade.sql 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.892445787Z level=info msg="Executing migration" id="rbac disabled migrator" 14:14:21 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.892509368Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=74.271µs 14:14:21 policy-pap | receive.buffer.bytes = 65536 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 14:14:21 policy-db-migrator | select 'upgrade to 1100 completed' as msg 14:14:21 policy-pap | reconnect.backoff.max.ms = 1000 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.898251833Z level=info msg="Executing migration" id="teams permissions migration" 14:14:21 policy-pap | reconnect.backoff.ms = 50 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.899003077Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=753.954µs 14:14:21 policy-pap | request.timeout.ms = 30000 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 14:14:21 policy-db-migrator | msg 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.902732645Z level=info msg="Executing migration" id="dashboard permissions" 14:14:21 policy-pap | retry.backoff.ms = 100 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 14:14:21 policy-db-migrator | upgrade to 1100 completed 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.903755724Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=1.027778ms 14:14:21 policy-pap | sasl.client.callback.handler.class = null 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 
(state.change.logger) 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.908905587Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 14:14:21 policy-pap | sasl.jaas.config = null 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 14:14:21 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.90960529Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=699.533µs 14:14:21 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.912993922Z level=info msg="Executing migration" id="drop managed folder create actions" 14:14:21 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 14:14:21 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.913222926Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=229.334µs 14:14:21 policy-pap | sasl.kerberos.service.name = null 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 14:14:21 policy-db-migrator | -------------- 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.918200827Z level=info msg="Executing migration" id="alerting notification permissions" 14:14:21 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 14:14:21 policy-db-migrator | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.918682365Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=479.608µs 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 14:14:21 policy-db-migrator | 14:14:21 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.924609713Z level=info msg="Executing migration" id="create query_history_star table v1" 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 14:14:21 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 14:14:21 policy-pap | 
sasl.login.callback.handler.class = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.925887667Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.278134ms 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-pap | sasl.login.class = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.929313799Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 14:14:21 policy-pap | sasl.login.connect.timeout.ms = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.931149283Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.834714ms 14:14:21 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 14:14:21 policy-pap | sasl.login.read.timeout.ms = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.935749537Z level=info msg="Executing migration" id="add column org_id in query_history_star" 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 14:14:21 policy-pap | sasl.login.refresh.buffer.seconds = 300 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.944260862Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.510695ms 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 14:14:21 policy-pap | sasl.login.refresh.min.period.seconds = 60 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.949449526Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 14:14:21 policy-pap | sasl.login.refresh.window.factor = 0.8 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.949533618Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=83.162µs 14:14:21 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 
(state.change.logger) 14:14:21 policy-pap | sasl.login.refresh.window.jitter = 0.05 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.952808257Z level=info msg="Executing migration" id="create correlation table v1" 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 14:14:21 policy-pap | sasl.login.retry.backoff.max.ms = 10000 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.953814856Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.006369ms 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 14:14:21 policy-pap | sasl.login.retry.backoff.ms = 100 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.958246906Z level=info msg="Executing migration" id="add index correlations.uid" 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 14:14:21 policy-pap | sasl.mechanism = GSSAPI 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.959408238Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.158561ms 14:14:21 policy-db-migrator | > upgrade 0120-audit_sequence.sql 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.971089301Z level=info msg="Executing migration" id="add index correlations.source_uid" 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.expected.audience = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.972512567Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.423706ms 14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.expected.issuer = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.976790945Z level=info msg="Executing migration" id="add correlation config column" 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 14:14:21 policy-pap | 
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.983789712Z level=info msg="Migration successfully executed" id="add correlation config column" duration=6.997687ms 14:14:21 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.987186594Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 14:14:21 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.988424066Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.236382ms 14:14:21 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 14:14:21 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.992933559Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 14:14:21 policy-pap | sasl.oauthbearer.scope.claim.name = scope 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.995566767Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=2.630658ms 14:14:21 policy-pap | sasl.oauthbearer.sub.claim.name = sub 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:54.999359146Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 14:14:21 policy-pap | sasl.oauthbearer.token.endpoint.url = null 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.023215234Z level=info msg="Migration successfully 
executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=23.855908ms 14:14:21 policy-pap | security.protocol = PLAINTEXT 14:14:21 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.036338757Z level=info msg="Executing migration" id="create correlation v2" 14:14:21 policy-pap | security.providers = null 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.03866815Z level=info msg="Migration successfully executed" id="create correlation v2" duration=2.327053ms 14:14:21 policy-pap | send.buffer.bytes = 131072 14:14:21 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.044055629Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 14:14:21 policy-pap | session.timeout.ms = 45000 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.045075128Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.019699ms 14:14:21 policy-pap | socket.connection.setup.timeout.max.ms = 30000 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:25,051] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.048225496Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 14:14:21 policy-pap | socket.connection.setup.timeout.ms = 10000 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,052] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.049325287Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.099391ms 14:14:21 policy-pap | ssl.cipher.suites = null 14:14:21 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 14:14:21 kafka | [2024-04-09 14:12:25,052] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader 
transition for partition __consumer_offsets-28 (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.053370151Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 14:14:21 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,053] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.055518761Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=2.14867ms 14:14:21 policy-pap | ssl.endpoint.identification.algorithm = https 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:25,053] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.058751381Z level=info msg="Executing migration" id="copy correlation v1 to v2" 14:14:21 policy-pap | ssl.engine.factory.class = null 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,109] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.059025346Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=274.155µs 14:14:21 policy-pap | ssl.key.password = null 14:14:21 policy-db-migrator | TRUNCATE TABLE sequence 14:14:21 kafka | [2024-04-09 14:12:25,128] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.064313064Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 14:14:21 policy-pap | ssl.keymanager.algorithm = SunX509 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,130] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 
(kafka.cluster.Partition) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.065051637Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=735.863µs 14:14:21 policy-pap | ssl.keystore.certificate.chain = null 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:25,131] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.071529927Z level=info msg="Executing migration" id="add provisioning column" 14:14:21 policy-pap | ssl.keystore.key = null 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:25,132] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.079541925Z level=info msg="Migration successfully executed" id="add provisioning column" duration=8.010618ms 14:14:21 policy-pap | ssl.keystore.location = null 14:14:21 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 14:14:21 kafka | [2024-04-09 14:12:25,151] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.082902297Z level=info msg="Executing migration" id="create entity_events table" 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,152] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | ssl.keystore.password = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.083502088Z level=info msg="Migration successfully executed" id="create entity_events table" duration=599.931µs 14:14:21 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics 14:14:21 kafka | [2024-04-09 14:12:25,152] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 14:14:21 policy-pap | ssl.keystore.type = JKS 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.087264718Z level=info msg="Executing migration" id="create dashboard public config v1" 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,152] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | ssl.protocol = TLSv1.3 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.088313617Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.047939ms 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:25,152] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 policy-pap | ssl.provider = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.09174274Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,159] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | ssl.secure.random.implementation = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.092169818Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 14:14:21 policy-db-migrator | DROP TABLE pdpstatistics 14:14:21 kafka | [2024-04-09 14:12:25,160] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | ssl.trustmanager.algorithm = PKIX 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.095333767Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,160] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 14:14:21 policy-pap | ssl.truststore.certificates = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.095757585Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:25,160] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.101284787Z level=info msg="Executing migration" id="Drop old dashboard public config table" 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:25,160] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 policy-pap | ssl.truststore.location = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.102664812Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.382825ms 14:14:21 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 14:14:21 kafka | [2024-04-09 14:12:25,168] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | ssl.truststore.password = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.106965772Z level=info msg="Executing migration" id="recreate dashboard public config v1" 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,168] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | ssl.truststore.type = JKS 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.108199385Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.234543ms 14:14:21 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats 14:14:21 kafka | [2024-04-09 14:12:25,168] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 14:14:21 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.112494674Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 14:14:21 policy-db-migrator | -------------- 14:14:21 kafka | [2024-04-09 14:12:25,168] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.113723897Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.230143ms 14:14:21 policy-db-migrator | 14:14:21 policy-pap | [2024-04-09T14:12:24.291+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 14:14:21 kafka | [2024-04-09 14:12:25,168] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.11769374Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 14:14:21 policy-db-migrator | 14:14:21 policy-pap | [2024-04-09T14:12:24.291+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 14:14:21 kafka | [2024-04-09 14:12:25,174] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.119104566Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.411076ms 14:14:21 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 14:14:21 policy-pap | [2024-04-09T14:12:24.291+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1712671944291 14:14:21 kafka | [2024-04-09 14:12:25,174] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.123166581Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-pap | [2024-04-09T14:12:24.291+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Subscribed to topic(s): policy-pdp-pap 14:14:21 kafka | [2024-04-09 14:12:25,175] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.124763001Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.59537ms 14:14:21 policy-db-migrator | DROP TABLE statistics_sequence 14:14:21 policy-pap | [2024-04-09T14:12:24.292+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 14:14:21 kafka | [2024-04-09 14:12:25,175] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.12958059Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 14:14:21 policy-db-migrator | -------------- 14:14:21 policy-pap | [2024-04-09T14:12:24.292+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=6736d7e9-6714-4f8e-b97c-2edf4d38cb1b, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@2ea0161f 14:14:21 kafka | [2024-04-09 14:12:25,175] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 
0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.1312513Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.6673ms 14:14:21 policy-pap | [2024-04-09T14:12:24.292+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=6736d7e9-6714-4f8e-b97c-2edf4d38cb1b, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 14:14:21 kafka | [2024-04-09 14:12:25,181] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-db-migrator | 14:14:21 kafka | [2024-04-09 14:12:25,181] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | [2024-04-09T14:12:24.292+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.135380577Z level=info msg="Executing migration" id="Drop public config table" 14:14:21 policy-db-migrator | policyadmin: OK: upgrade (1300) 14:14:21 kafka | [2024-04-09 14:12:25,181] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 14:14:21 policy-pap | allow.auto.create.topics = true 14:14:21 policy-db-migrator | name version 14:14:21 kafka | [2024-04-09 14:12:25,181] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.136264193Z level=info msg="Migration successfully executed" id="Drop public config table" duration=883.916µs 14:14:21 policy-pap | auto.commit.interval.ms = 5000 14:14:21 policy-db-migrator | policyadmin 1300 14:14:21 kafka | [2024-04-09 14:12:25,182] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.140498541Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 14:14:21 policy-pap | auto.include.jmx.reporter = true 14:14:21 policy-db-migrator | ID script operation from_version to_version tag success atTime 14:14:21 kafka | [2024-04-09 14:12:25,192] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.14207118Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.573979ms 14:14:21 policy-pap | auto.offset.reset = latest 14:14:21 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54 14:14:21 kafka | [2024-04-09 14:12:25,193] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.148069581Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 14:14:21 policy-pap | bootstrap.servers = [kafka:9092] 14:14:21 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54 14:14:21 kafka | [2024-04-09 14:12:25,193] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.150247011Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=2.17847ms 14:14:21 policy-pap | check.crcs = true 14:14:21 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54 14:14:21 kafka | [2024-04-09 14:12:25,193] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.154730674Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 14:14:21 policy-pap | client.dns.lookup = use_all_dns_ips 14:14:21 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54 14:14:21 kafka | [2024-04-09 14:12:25,194] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.155998598Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.268524ms 14:14:21 policy-pap | client.id = consumer-policy-pap-4 14:14:21 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54 14:14:21 kafka | [2024-04-09 14:12:25,207] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | client.rack = 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.158912192Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 14:14:21 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54 14:14:21 kafka | [2024-04-09 14:12:25,208] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | connections.max.idle.ms = 540000 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.160038762Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.12706ms 14:14:21 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54 14:14:21 kafka | [2024-04-09 14:12:25,208] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 14:14:21 policy-pap | default.api.timeout.ms = 60000 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.165272469Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 14:14:21 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54 14:14:21 kafka | [2024-04-09 14:12:25,208] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | enable.auto.commit = true 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.19021667Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=24.940481ms 14:14:21 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54 14:14:21 kafka | [2024-04-09 14:12:25,208] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 policy-pap | exclude.internal.topics = true 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.195919305Z level=info msg="Executing migration" id="add annotations_enabled column" 14:14:21 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54 14:14:21 kafka | [2024-04-09 14:12:25,227] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | fetch.max.bytes = 52428800 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.201967047Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.047382ms 14:14:21 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54 14:14:21 kafka | [2024-04-09 14:12:25,228] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | fetch.max.wait.ms = 500 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.204994863Z level=info msg="Executing migration" id="add time_selection_enabled column" 14:14:21 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54 14:14:21 kafka | [2024-04-09 14:12:25,228] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 14:14:21 policy-pap | fetch.min.bytes = 1 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.213667753Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.66695ms 14:14:21 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54 14:14:21 kafka | [2024-04-09 14:12:25,228] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | group.id = policy-pap 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.21672823Z level=info msg="Executing migration" id="delete orphaned public dashboards" 14:14:21 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54 14:14:21 kafka | [2024-04-09 14:12:25,228] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 policy-pap | group.instance.id = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.216969764Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=241.724µs 14:14:21 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54 14:14:21 kafka | [2024-04-09 14:12:25,236] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | heartbeat.interval.ms = 3000 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.220958038Z level=info msg="Executing migration" id="add share column" 14:14:21 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54 14:14:21 kafka | [2024-04-09 14:12:25,238] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | interceptor.classes = [] 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.229939624Z level=info msg="Migration successfully executed" id="add share column" duration=8.979086ms 14:14:21 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54 14:14:21 kafka | [2024-04-09 14:12:25,239] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 14:14:21 policy-pap | internal.leave.group.on.close = true 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.233043281Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 14:14:21 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54 14:14:21 kafka | [2024-04-09 14:12:25,239] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.233250225Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=207.294µs 14:14:21 kafka | [2024-04-09 14:12:25,239] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 policy-pap | isolation.level = read_uncommitted 14:14:21 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.236090828Z level=info msg="Executing migration" id="create file table" 14:14:21 kafka | [2024-04-09 14:12:25,249] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 14:14:21 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.237076306Z level=info msg="Migration successfully executed" id="create file table" duration=984.988µs 14:14:21 kafka | [2024-04-09 14:12:25,250] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | max.partition.fetch.bytes = 1048576 14:14:21 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.240191543Z level=info msg="Executing migration" id="file table idx: path natural pk" 14:14:21 kafka | [2024-04-09 14:12:25,250] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 14:14:21 policy-pap | max.poll.interval.ms = 300000 14:14:21 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:54 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.241321324Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.127491ms 14:14:21 kafka | [2024-04-09 14:12:25,250] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | max.poll.records = 500 14:14:21 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.245234317Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 14:14:21 policy-pap | metadata.max.age.ms = 300000 14:14:21 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.246392028Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.158341ms 14:14:21 kafka | [2024-04-09 14:12:25,250] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 policy-pap | metric.reporters = [] 14:14:21 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.249297932Z level=info msg="Executing migration" id="create file_meta table" 14:14:21 kafka | [2024-04-09 14:12:25,263] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | metrics.num.samples = 2 14:14:21 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.25029423Z level=info msg="Migration successfully executed" id="create file_meta table" duration=995.578µs 14:14:21 kafka | [2024-04-09 14:12:25,264] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | metrics.recording.level = INFO 14:14:21 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.253469139Z level=info msg="Executing migration" id="file table idx: path key" 14:14:21 kafka | [2024-04-09 14:12:25,264] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) 14:14:21 policy-pap | metrics.sample.window.ms = 30000 14:14:21 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.254830194Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.360695ms 14:14:21 kafka | [2024-04-09 14:12:25,265] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 14:14:21 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.262877823Z level=info msg="Executing migration" id="set path collation in file table" 14:14:21 kafka | [2024-04-09 14:12:25,265] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 policy-pap | receive.buffer.bytes = 65536 14:14:21 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.262953074Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=76.311µs 14:14:21 kafka | [2024-04-09 14:12:25,277] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | reconnect.backoff.max.ms = 1000 14:14:21 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.272225605Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 14:14:21 kafka | [2024-04-09 14:12:25,278] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | reconnect.backoff.ms = 50 14:14:21 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.272369998Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=148.053µs 14:14:21 kafka | [2024-04-09 14:12:25,278] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) 14:14:21 policy-pap | request.timeout.ms = 30000 14:14:21 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.275252121Z level=info msg="Executing migration" id="managed permissions migration" 14:14:21 kafka | [2024-04-09 14:12:25,279] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | retry.backoff.ms = 100 14:14:21 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.275910583Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=663.532µs 14:14:21 kafka | [2024-04-09 14:12:25,279] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 policy-pap | sasl.client.callback.handler.class = null 14:14:21 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.279158443Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 14:14:21 kafka | [2024-04-09 14:12:25,286] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | sasl.jaas.config = null 14:14:21 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.279354807Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=196.464µs 14:14:21 kafka | [2024-04-09 14:12:25,286] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 14:14:21 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.283497644Z level=info msg="Executing migration" id="RBAC action name migrator" 14:14:21 kafka | [2024-04-09 14:12:25,286] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) 14:14:21 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 14:14:21 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.285552092Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=2.054357ms 14:14:21 kafka | [2024-04-09 14:12:25,286] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | sasl.kerberos.service.name = null 14:14:21 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.289828071Z level=info msg="Executing migration" id="Add UID column to playlist" 14:14:21 kafka | [2024-04-09 14:12:25,286] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.302201729Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=12.374139ms 14:14:21 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55 14:14:21 kafka | [2024-04-09 14:12:25,294] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.306989488Z level=info msg="Executing migration" id="Update uid column values in playlist" 14:14:21 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55 14:14:21 kafka | [2024-04-09 14:12:25,295] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | sasl.login.callback.handler.class = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.307164481Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=176.453µs 14:14:21 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55 14:14:21 kafka | [2024-04-09 14:12:25,295] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) 14:14:21 policy-pap | sasl.login.class = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.313891955Z level=info msg="Executing migration" id="Add index for uid in playlist" 14:14:21 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55 14:14:21 kafka | [2024-04-09 14:12:25,295] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | sasl.login.connect.timeout.ms = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.315140158Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.248233ms 14:14:21 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55 14:14:21 kafka | [2024-04-09 14:12:25,295] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 policy-pap | sasl.login.read.timeout.ms = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.318027482Z level=info msg="Executing migration" id="update group index for alert rules" 14:14:21 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:55 14:14:21 kafka | [2024-04-09 14:12:25,305] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | sasl.login.refresh.buffer.seconds = 300 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.318429329Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=402.787µs 14:14:21 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56 14:14:21 kafka | [2024-04-09 14:12:25,305] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | sasl.login.refresh.min.period.seconds = 60 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.322067246Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 14:14:21 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56 14:14:21 kafka | [2024-04-09 14:12:25,305] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) 14:14:21 policy-pap | sasl.login.refresh.window.factor = 0.8 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.322285Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=219.624µs 14:14:21 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56 14:14:21 kafka | [2024-04-09 14:12:25,306] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | sasl.login.refresh.window.jitter = 0.05 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.325393768Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 14:14:21 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56 14:14:21 kafka | [2024-04-09 14:12:25,306] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 policy-pap | sasl.login.retry.backoff.max.ms = 10000 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.32604528Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=651.362µs 14:14:21 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56 14:14:21 kafka | [2024-04-09 14:12:25,312] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | sasl.login.retry.backoff.ms = 100 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.330445911Z level=info msg="Executing migration" id="add action column to seed_assignment" 14:14:21 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56 14:14:21 kafka | [2024-04-09 14:12:25,313] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | sasl.mechanism = GSSAPI 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.340067479Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.618418ms 14:14:21 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56 14:14:21 kafka | [2024-04-09 14:12:25,313] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) 14:14:21 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.346374115Z level=info msg="Executing migration" id="add scope column to seed_assignment" 14:14:21 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56 14:14:21 kafka | [2024-04-09 14:12:25,313] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | sasl.oauthbearer.expected.audience = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.356218437Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=9.842102ms 14:14:21 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56 14:14:21 kafka | [2024-04-09 14:12:25,313] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.expected.issuer = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.35959928Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 14:14:21 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56 14:14:21 kafka | [2024-04-09 14:12:25,321] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.36068917Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.09002ms 14:14:21 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56 14:14:21 kafka | [2024-04-09 14:12:25,321] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 14:14:21 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.364728544Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 14:14:21 kafka | [2024-04-09 14:12:25,321] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 14:14:21 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.460706218Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=95.972934ms 14:14:21 kafka | [2024-04-09 14:12:25,321] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56 14:14:21 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.468991271Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 14:14:21 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56 14:14:21 kafka | [2024-04-09 14:12:25,321] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.scope.claim.name = scope 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.47000591Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.016219ms 14:14:21 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56 14:14:21 kafka | [2024-04-09 14:12:25,328] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | sasl.oauthbearer.sub.claim.name = sub 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.504849314Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 14:14:21 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56 14:14:21 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56 14:14:21 policy-pap | sasl.oauthbearer.token.endpoint.url = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.506981013Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=2.13346ms 14:14:21 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56 14:14:21 policy-pap | security.protocol = PLAINTEXT 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.513122236Z level=info msg="Executing migration" id="add primary key to seed_assigment" 14:14:21 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56 14:14:21 policy-pap | security.providers = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.541509221Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=28.383555ms 14:14:21 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56 14:14:21 policy-pap | send.buffer.bytes = 131072 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.549076021Z level=info msg="Executing migration" id="add origin column to seed_assignment" 14:14:21 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56 14:14:21 policy-pap | session.timeout.ms = 45000 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.555492709Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.416488ms 14:14:21 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56 14:14:21 policy-pap | socket.connection.setup.timeout.max.ms = 30000 14:14:21 kafka | [2024-04-09 14:12:25,328] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.559948072Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 14:14:21 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56 14:14:21 policy-pap | socket.connection.setup.timeout.ms = 10000 14:14:21 kafka | [2024-04-09 14:12:25,328] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition 
__consumer_offsets-16 (kafka.cluster.Partition) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.560281978Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=335.286µs 14:14:21 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:56 14:14:21 policy-pap | ssl.cipher.suites = null 14:14:21 kafka | [2024-04-09 14:12:25,329] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.567566272Z level=info msg="Executing migration" id="prevent seeding OnCall access" 14:14:21 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57 14:14:21 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.567803897Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=241.305µs 14:14:21 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57 14:14:21 policy-pap | ssl.endpoint.identification.algorithm = https 14:14:21 kafka | [2024-04-09 14:12:25,329] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.570907794Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 14:14:21 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57 14:14:21 policy-pap | ssl.engine.factory.class = null 14:14:21 kafka | [2024-04-09 14:12:25,335] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.571124078Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=216.664µs 14:14:21 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57 14:14:21 policy-pap | ssl.key.password = null 14:14:21 kafka | [2024-04-09 14:12:25,335] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.574357728Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 14:14:21 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57 14:14:21 policy-pap | ssl.keymanager.algorithm = SunX509 14:14:21 kafka | [2024-04-09 14:12:25,335] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.574566022Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=209.184µs 
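The kafka broker entries interleaved above show each __consumer_offsets partition being created, loaded to high watermark 0, and elected leader on broker 1 with ISR [1]. As a rough illustration only (not part of the CSIT suite or the broker's own code), a standalone AdminClient check against the same kafka:9092 endpoint could confirm that leadership state; the class name and output format below are hypothetical.

// Hypothetical standalone check: list the leader and ISR of every
// __consumer_offsets partition, mirroring the "Leader __consumer_offsets-N ...
// starts at leader epoch 0" entries above. Assumes the CSIT broker at kafka:9092.
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class OffsetsTopicLeaders {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription desc = admin.describeTopics(List.of("__consumer_offsets"))
                    .allTopicNames().get().get("__consumer_offsets");
            // On this single-broker deployment the log reports broker 1 as
            // leader of every partition, with ISR [1].
            desc.partitions().forEach(p ->
                    System.out.printf("partition %d leader %s isr %s%n",
                            p.partition(), p.leader(), p.isr()));
        }
    }
}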
14:14:21 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57 14:14:21 policy-pap | ssl.keystore.certificate.chain = null 14:14:21 kafka | [2024-04-09 14:12:25,335] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.581155133Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 14:14:21 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57 14:14:21 policy-pap | ssl.keystore.key = null 14:14:21 kafka | [2024-04-09 14:12:25,335] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.581373217Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=219.404µs 14:14:21 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57 14:14:21 policy-pap | ssl.keystore.location = null 14:14:21 kafka | [2024-04-09 14:12:25,346] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.591866521Z level=info msg="Executing migration" id="create folder table" 14:14:21 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57 14:14:21 policy-pap | ssl.keystore.password = null 14:14:21 kafka | [2024-04-09 14:12:25,346] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.593098594Z level=info msg="Migration successfully executed" id="create folder table" duration=1.234063ms 14:14:21 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57 14:14:21 policy-pap | ssl.keystore.type = JKS 14:14:21 kafka | [2024-04-09 14:12:25,346] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.596073909Z level=info msg="Executing migration" id="Add index for parent_uid" 14:14:21 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57 14:14:21 policy-pap | ssl.protocol = TLSv1.3 14:14:21 kafka | [2024-04-09 14:12:25,346] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.597649698Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.574969ms 14:14:21 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 
0904241411540800u 1 2024-04-09 14:11:57 14:14:21 kafka | [2024-04-09 14:12:25,347] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.601886776Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 14:14:21 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57 14:14:21 policy-pap | ssl.provider = null 14:14:21 kafka | [2024-04-09 14:12:25,352] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.603316683Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.429627ms 14:14:21 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57 14:14:21 policy-pap | ssl.secure.random.implementation = null 14:14:21 kafka | [2024-04-09 14:12:25,353] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.606716226Z level=info msg="Executing migration" id="Update folder title length" 14:14:21 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57 14:14:21 policy-pap | ssl.trustmanager.algorithm = PKIX 14:14:21 policy-pap | ssl.truststore.certificates = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.606882729Z level=info msg="Migration successfully executed" id="Update folder title length" duration=166.383µs 14:14:21 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57 14:14:21 kafka | [2024-04-09 14:12:25,353] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 14:14:21 policy-pap | ssl.truststore.location = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.609708891Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 14:14:21 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57 14:14:21 kafka | [2024-04-09 14:12:25,353] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | ssl.truststore.password = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.611072476Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.364885ms 14:14:21 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57 14:14:21 kafka | [2024-04-09 14:12:25,353] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 
0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 14:14:21 policy-pap | ssl.truststore.type = JKS 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.616265242Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 14:14:21 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:57 14:14:21 kafka | [2024-04-09 14:12:25,361] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.617611697Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.344905ms 14:14:21 kafka | [2024-04-09 14:12:25,362] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.620882197Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 14:14:21 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:58 14:14:21 kafka | [2024-04-09 14:12:25,362] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) 14:14:21 policy-pap | [2024-04-09T14:12:24.297+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.623071718Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=2.189121ms 14:14:21 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:58 14:14:21 kafka | [2024-04-09 14:12:25,362] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | [2024-04-09T14:12:24.297+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.628315485Z level=info msg="Executing migration" id="Sync dashboard and folder table" 14:14:21 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:58 14:14:21 kafka | [2024-04-09 14:12:25,363] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 policy-pap | [2024-04-09T14:12:24.297+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1712671944297 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.628863725Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=547.93µs 14:14:21 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:58 14:14:21 kafka | [2024-04-09 14:12:25,369] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | [2024-04-09T14:12:24.297+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.634714943Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 14:14:21 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:58 14:14:21 policy-pap | [2024-04-09T14:12:24.298+00:00|INFO|ServiceManager|main] Policy PAP starting topics 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.63562989Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=915.407µs 14:14:21 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:58 14:14:21 kafka | [2024-04-09 14:12:25,369] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.638646956Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 14:14:21 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:58 14:14:21 policy-pap | [2024-04-09T14:12:24.298+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=6736d7e9-6714-4f8e-b97c-2edf4d38cb1b, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 14:14:21 kafka | [2024-04-09 14:12:25,369] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.640538711Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.892075ms 14:14:21 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0904241411540800u 1 2024-04-09 14:11:58 14:14:21 policy-pap | [2024-04-09T14:12:24.298+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource 
[consumerGroup=8886bf5a-38da-4c7c-af7d-ca09814a22ad, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 14:14:21 kafka | [2024-04-09 14:12:25,369] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.643692639Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 14:14:21 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 0904241411540900u 1 2024-04-09 14:11:58 14:14:21 policy-pap | [2024-04-09T14:12:24.298+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=564a2e6e-474f-4e32-b0d5-9fb32de5e450, alive=false, publisher=null]]: starting 14:14:21 kafka | [2024-04-09 14:12:25,369] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.644976423Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.283304ms 14:14:21 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 0904241411540900u 1 2024-04-09 14:11:58 14:14:21 policy-pap | [2024-04-09T14:12:24.313+00:00|INFO|ProducerConfig|main] ProducerConfig values: 14:14:21 kafka | [2024-04-09 14:12:25,377] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.651029764Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 14:14:21 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 0904241411540900u 1 2024-04-09 14:11:58 14:14:21 policy-pap | acks = -1 14:14:21 kafka | [2024-04-09 14:12:25,377] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.652467251Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.437707ms 14:14:21 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 0904241411540900u 1 2024-04-09 14:11:58 14:14:21 policy-pap | auto.include.jmx.reporter = true 14:14:21 kafka | [2024-04-09 14:12:25,377] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.655330434Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 14:14:21 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 
0904241411540900u 1 2024-04-09 14:11:58 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.656631458Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.300814ms 14:14:21 kafka | [2024-04-09 14:12:25,378] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 0904241411540900u 1 2024-04-09 14:11:58 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.660263915Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 14:14:21 policy-pap | batch.size = 16384 14:14:21 kafka | [2024-04-09 14:12:25,378] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 14:14:21 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0904241411540900u 1 2024-04-09 14:11:58 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.661449077Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.185662ms 14:14:21 policy-pap | bootstrap.servers = [kafka:9092] 14:14:21 kafka | [2024-04-09 14:12:25,384] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0904241411540900u 1 2024-04-09 14:11:58 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.665818208Z level=info msg="Executing migration" id="create anon_device table" 14:14:21 policy-pap | buffer.memory = 33554432 14:14:21 kafka | [2024-04-09 14:12:25,384] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0904241411540900u 1 2024-04-09 14:11:58 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.667450638Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.63199ms 14:14:21 policy-pap | client.dns.lookup = use_all_dns_ips 14:14:21 kafka | [2024-04-09 14:12:25,384] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 14:14:21 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 0904241411540900u 1 2024-04-09 14:11:58 14:14:21 policy-pap | client.id = producer-1 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.673641812Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 14:14:21 kafka | [2024-04-09 14:12:25,384] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 0904241411540900u 1 2024-04-09 14:11:59 14:14:21 policy-pap | compression.type = none 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.675062849Z level=info msg="Migration 
successfully executed" id="add unique index anon_device.device_id" duration=1.421107ms 14:14:21 kafka | [2024-04-09 14:12:25,385] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 14:14:21 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 0904241411540900u 1 2024-04-09 14:11:59 14:14:21 policy-pap | connections.max.idle.ms = 540000 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.679955969Z level=info msg="Executing migration" id="add index anon_device.updated_at" 14:14:21 kafka | [2024-04-09 14:12:25,395] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 0904241411540900u 1 2024-04-09 14:11:59 14:14:21 policy-pap | delivery.timeout.ms = 120000 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.682038927Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=2.079678ms 14:14:21 kafka | [2024-04-09 14:12:25,396] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 0904241411541000u 1 2024-04-09 14:11:59 14:14:21 policy-pap | enable.idempotence = true 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.685279127Z level=info msg="Executing migration" id="create signing_key table" 14:14:21 kafka | [2024-04-09 14:12:25,396] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 14:14:21 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 0904241411541000u 1 2024-04-09 14:11:59 14:14:21 policy-pap | interceptor.classes = [] 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.686381578Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.105011ms 14:14:21 kafka | [2024-04-09 14:12:25,396] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 0904241411541000u 1 2024-04-09 14:11:59 14:14:21 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.689996694Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 14:14:21 kafka | [2024-04-09 14:12:25,396] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 0904241411541000u 1 2024-04-09 14:11:59 14:14:21 policy-pap | linger.ms = 0 14:14:21 kafka | [2024-04-09 14:12:25,407] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 0904241411541000u 1 2024-04-09 14:11:59 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.691584014Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.58709ms 14:14:21 policy-pap | max.block.ms = 60000 14:14:21 kafka | [2024-04-09 14:12:25,407] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 0904241411541000u 1 2024-04-09 14:11:59 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.696101797Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 14:14:21 policy-pap | max.in.flight.requests.per.connection = 5 14:14:21 kafka | [2024-04-09 14:12:25,407] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.697374691Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.272874ms 14:14:21 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 0904241411541000u 1 2024-04-09 14:11:59 14:14:21 policy-pap | max.request.size = 1048576 14:14:21 kafka | [2024-04-09 14:12:25,407] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.700331865Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 14:14:21 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 0904241411541000u 1 2024-04-09 14:11:59 14:14:21 policy-pap | metadata.max.age.ms = 300000 14:14:21 kafka | [2024-04-09 14:12:25,408] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.700855385Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=464.759µs 14:14:21 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 0904241411541000u 1 2024-04-09 14:11:59 14:14:21 policy-pap | metadata.max.idle.ms = 300000 14:14:21 kafka | [2024-04-09 14:12:25,418] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.706214424Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 14:14:21 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 0904241411541100u 1 2024-04-09 14:11:59 14:14:21 policy-pap | metric.reporters = [] 14:14:21 kafka | [2024-04-09 14:12:25,419] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.718114424Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=11.88318ms 14:14:21 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 0904241411541200u 1 2024-04-09 14:11:59 14:14:21 policy-pap | metrics.num.samples = 2 14:14:21 kafka | [2024-04-09 14:12:25,419] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.722365383Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 14:14:21 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 0904241411541200u 1 2024-04-09 14:11:59 14:14:21 policy-pap | metrics.recording.level = INFO 14:14:21 kafka | [2024-04-09 14:12:25,419] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.723211978Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=847.365µs 14:14:21 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 0904241411541200u 1 2024-04-09 14:11:59 14:14:21 policy-pap | metrics.sample.window.ms = 30000 14:14:21 kafka | [2024-04-09 14:12:25,419] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.726049231Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 14:14:21 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 0904241411541200u 1 2024-04-09 14:11:59 14:14:21 policy-pap | partitioner.adaptive.partitioning.enable = true 14:14:21 kafka | [2024-04-09 14:12:25,431] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.727174191Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.12486ms 14:14:21 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 0904241411541300u 1 2024-04-09 14:11:59 14:14:21 policy-pap | partitioner.availability.timeout.ms = 0 14:14:21 kafka | [2024-04-09 14:12:25,432] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.730235958Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 14:14:21 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 0904241411541300u 1 2024-04-09 14:11:59 14:14:21 policy-pap | partitioner.class = null 14:14:21 kafka | [2024-04-09 14:12:25,432] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.731526451Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.290433ms 14:14:21 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 0904241411541300u 1 2024-04-09 14:12:00 14:14:21 policy-pap | partitioner.ignore.keys = false 14:14:21 kafka | [2024-04-09 14:12:25,432] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.734315773Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 14:14:21 policy-db-migrator | policyadmin: OK @ 1300 14:14:21 policy-pap | receive.buffer.bytes = 32768 14:14:21 kafka | [2024-04-09 14:12:25,432] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.735554876Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.239353ms 14:14:21 policy-pap | reconnect.backoff.max.ms = 1000 14:14:21 kafka | [2024-04-09 14:12:25,441] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.741329062Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 14:14:21 policy-pap | reconnect.backoff.ms = 50 14:14:21 kafka | [2024-04-09 14:12:25,442] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.742678007Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.348715ms 14:14:21 policy-pap | request.timeout.ms = 30000 14:14:21 kafka | [2024-04-09 14:12:25,442] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.747379494Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 14:14:21 policy-pap | retries = 2147483647 14:14:21 kafka | [2024-04-09 14:12:25,442] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.749568905Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=2.188211ms 14:14:21 policy-pap | retry.backoff.ms = 100 14:14:21 kafka | [2024-04-09 14:12:25,442] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.753899705Z level=info msg="Executing migration" id="create sso_setting table" 14:14:21 policy-pap | sasl.client.callback.handler.class = null 14:14:21 kafka | [2024-04-09 14:12:25,458] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.755983313Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=2.076448ms 14:14:21 policy-pap | sasl.jaas.config = null 14:14:21 kafka | [2024-04-09 14:12:25,460] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.760752261Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 14:14:21 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 14:14:21 kafka | [2024-04-09 14:12:25,460] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.762710028Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.959146ms 14:14:21 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.765623971Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 14:14:21 kafka | [2024-04-09 14:12:25,460] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | sasl.kerberos.service.name = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.765993188Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=370.217µs 14:14:21 kafka | [2024-04-09 14:12:25,460] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.770073404Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 14:14:21 kafka | [2024-04-09 14:12:25,468] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.770264087Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=189.983µs 14:14:21 kafka | [2024-04-09 14:12:25,469] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | sasl.login.callback.handler.class = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.772883436Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 14:14:21 kafka | [2024-04-09 14:12:25,469] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 14:14:21 policy-pap | sasl.login.class = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.782247229Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=9.363183ms 14:14:21 kafka | [2024-04-09 14:12:25,469] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | sasl.login.connect.timeout.ms = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.786542798Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 14:14:21 kafka | [2024-04-09 14:12:25,469] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 policy-pap | sasl.login.read.timeout.ms = null 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.795782179Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.238421ms 14:14:21 kafka | [2024-04-09 14:12:25,483] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | sasl.login.refresh.buffer.seconds = 300 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.79854781Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 14:14:21 kafka | [2024-04-09 14:12:25,484] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | sasl.login.refresh.min.period.seconds = 60 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.799012898Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=464.768µs 14:14:21 kafka | [2024-04-09 14:12:25,484] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 14:14:21 policy-pap | sasl.login.refresh.window.factor = 0.8 14:14:21 grafana | logger=migrator t=2024-04-09T14:11:55.804019341Z level=info msg="migrations completed" performed=548 skipped=0 duration=4.957264176s 14:14:21 kafka | [2024-04-09 14:12:25,484] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | sasl.login.refresh.window.jitter = 0.05 14:14:21 grafana | logger=sqlstore t=2024-04-09T14:11:55.813678179Z level=info msg="Created default admin" user=admin 14:14:21 kafka | [2024-04-09 14:12:25,484] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 14:14:21 policy-pap | sasl.login.retry.backoff.max.ms = 10000 14:14:21 grafana | logger=sqlstore t=2024-04-09T14:11:55.814060466Z level=info msg="Created default organization" 14:14:21 kafka | [2024-04-09 14:12:25,496] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | sasl.login.retry.backoff.ms = 100 14:14:21 grafana | logger=secrets t=2024-04-09T14:11:55.819777382Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 14:14:21 kafka | [2024-04-09 14:12:25,497] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) 14:14:21 policy-pap | sasl.mechanism = GSSAPI 14:14:21 grafana | logger=plugin.store t=2024-04-09T14:11:55.840590977Z level=info msg="Loading plugins..." 
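At this point the grafana migrator has finished (548 migrations in roughly 4.96 s) and the interleaved policy-pap lines have dumped most of the ProducerConfig for client.id = producer-1: bootstrap.servers = [kafka:9092], enable.idempotence = true, retries = 2147483647, linger.ms = 0, batch.size = 16384, and StringSerializer for both key and value. As a rough illustration only, not the actual policy-pap source, a minimal Java producer that would print an equivalent config dump looks like the sketch below; only the settings and the policy-pdp-pap topic name come from this log, while the key and payload are made-up placeholders.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PapPublisherSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Values mirrored from the ProducerConfig dump in this log.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.CLIENT_ID_CONFIG, "producer-1");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);   // forces acks=all semantics
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE); // 2147483647, as logged
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
        props.put(ProducerConfig.LINGER_MS_CONFIG, 0);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // Constructing the producer is what emits the "ProducerConfig values:" dump
        // and the "Instantiated an idempotent producer." line seen in this log.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Hypothetical key/payload for illustration; the topic name does appear in the log.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "key", "message"));
        }
    }
}
```

enable.idempotence = true is why the client reports "Instantiated an idempotent producer" and why retries sits at Integer.MAX_VALUE: with idempotence on, retried sends cannot produce duplicates, so the client defaults to retrying indefinitely within delivery.timeout.ms.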
14:14:21 kafka | [2024-04-09 14:12:25,498] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) 14:14:21 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 14:14:21 grafana | logger=local.finder t=2024-04-09T14:11:55.883925877Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 14:14:21 kafka | [2024-04-09 14:12:25,498] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | sasl.oauthbearer.expected.audience = null 14:14:21 grafana | logger=plugin.store t=2024-04-09T14:11:55.884031579Z level=info msg="Plugins loaded" count=55 duration=43.424362ms 14:14:21 kafka | [2024-04-09 14:12:25,499] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(ITmYpZ6rSK-iF5o_1J2T3Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.expected.issuer = null 14:14:21 grafana | logger=query_data t=2024-04-09T14:11:55.886518695Z level=info msg="Query Service initialization" 14:14:21 kafka | [2024-04-09 14:12:25,538] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 14:14:21 grafana | logger=live.push_http t=2024-04-09T14:11:55.890193963Z level=info msg="Live Push Gateway initialization" 14:14:21 kafka | [2024-04-09 14:12:25,538] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 14:14:21 grafana | logger=ngalert.migration t=2024-04-09T14:11:55.896122553Z level=info msg=Starting 14:14:21 kafka | [2024-04-09 14:12:25,538] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) 14:14:21 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 14:14:21 grafana | logger=ngalert.migration t=2024-04-09T14:11:55.89652894Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false 14:14:21 kafka | [2024-04-09 14:12:25,538] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 14:14:21 grafana | logger=ngalert.migration orgID=1 t=2024-04-09T14:11:55.896956728Z level=info msg="Migrating alerts for organisation" 14:14:21 kafka | [2024-04-09 14:12:25,538] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.scope.claim.name = scope 14:14:21 grafana | logger=ngalert.migration orgID=1 t=2024-04-09T14:11:55.89758835Z level=info msg="Alerts found to migrate" alerts=0 14:14:21 kafka | [2024-04-09 14:12:25,545] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | sasl.oauthbearer.sub.claim.name = sub 14:14:21 grafana | logger=ngalert.migration t=2024-04-09T14:11:55.899434234Z level=info msg="Completed alerting migration" 14:14:21 kafka | [2024-04-09 14:12:25,545] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | sasl.oauthbearer.token.endpoint.url = null 14:14:21 grafana | logger=ngalert.state.manager t=2024-04-09T14:11:55.924562798Z level=info msg="Running in alternative execution of Error/NoData mode" 14:14:21 kafka | [2024-04-09 14:12:25,545] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) 14:14:21 policy-pap | security.protocol = PLAINTEXT 14:14:21 grafana | logger=infra.usagestats.collector t=2024-04-09T14:11:55.926955562Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 14:14:21 kafka | [2024-04-09 14:12:25,545] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | security.providers = null 14:14:21 grafana | logger=provisioning.datasources t=2024-04-09T14:11:55.930742442Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 14:14:21 kafka | [2024-04-09 14:12:25,545] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 policy-pap | send.buffer.bytes = 131072 14:14:21 grafana | logger=provisioning.alerting t=2024-04-09T14:11:55.94574985Z level=info msg="starting to provision alerting" 14:14:21 kafka | [2024-04-09 14:12:25,558] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | socket.connection.setup.timeout.max.ms = 30000 14:14:21 grafana | logger=provisioning.alerting t=2024-04-09T14:11:55.945802121Z level=info msg="finished to provision alerting" 14:14:21 kafka | [2024-04-09 14:12:25,558] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | socket.connection.setup.timeout.ms = 10000 14:14:21 grafana | logger=ngalert.state.manager t=2024-04-09T14:11:55.949974818Z level=info msg="Warming state cache for startup" 14:14:21 kafka | [2024-04-09 14:12:25,559] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) 14:14:21 policy-pap | ssl.cipher.suites = null 14:14:21 grafana | logger=grafanaStorageLogger t=2024-04-09T14:11:55.959404832Z level=info msg="Storage starting" 14:14:21 kafka | [2024-04-09 14:12:25,559] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 14:14:21 grafana | logger=http.server t=2024-04-09T14:11:55.958325652Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 14:14:21 kafka | [2024-04-09 14:12:25,560] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 policy-pap | ssl.endpoint.identification.algorithm = https 14:14:21 grafana | logger=ngalert.multiorg.alertmanager t=2024-04-09T14:11:55.959666147Z level=info msg="Starting MultiOrg Alertmanager" 14:14:21 kafka | [2024-04-09 14:12:25,566] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | ssl.engine.factory.class = null 14:14:21 grafana | logger=sqlstore.transactions t=2024-04-09T14:11:55.959793819Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 14:14:21 kafka | [2024-04-09 14:12:25,567] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | ssl.key.password = null 14:14:21 grafana | logger=ngalert.state.manager t=2024-04-09T14:11:55.989306214Z level=info msg="State cache has been initialized" states=0 duration=39.326486ms 14:14:21 kafka | [2024-04-09 14:12:25,567] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 14:14:21 policy-pap | ssl.keymanager.algorithm = SunX509 14:14:21 grafana | logger=ngalert.scheduler t=2024-04-09T14:11:55.989521028Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 14:14:21 kafka | [2024-04-09 14:12:25,567] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | ssl.keystore.certificate.chain = null 14:14:21 grafana | logger=ticker t=2024-04-09T14:11:55.989789473Z level=info msg=starting first_tick=2024-04-09T14:12:00Z 14:14:21 kafka | [2024-04-09 14:12:25,567] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 policy-pap | ssl.keystore.key = null 14:14:21 grafana | logger=provisioning.dashboard t=2024-04-09T14:11:56.018392902Z level=info msg="starting to provision dashboards" 14:14:21 kafka | [2024-04-09 14:12:25,571] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | ssl.keystore.location = null 14:14:21 grafana | logger=grafana.update.checker t=2024-04-09T14:11:56.06105286Z level=info msg="Update check succeeded" duration=112.422247ms 14:14:21 kafka | [2024-04-09 14:12:25,571] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | ssl.keystore.password = null 14:14:21 grafana | logger=sqlstore.transactions t=2024-04-09T14:11:56.073124123Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 14:14:21 kafka | [2024-04-09 14:12:25,571] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 14:14:21 policy-pap | ssl.keystore.type = JKS 14:14:21 grafana | logger=sqlstore.transactions t=2024-04-09T14:11:56.083732339Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked" 14:14:21 kafka | [2024-04-09 14:12:25,572] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | ssl.protocol = TLSv1.3 14:14:21 grafana | logger=plugins.update.checker t=2024-04-09T14:11:56.08813062Z level=info msg="Update check succeeded" duration=142.157726ms 14:14:21 kafka | [2024-04-09 14:12:25,572] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 policy-pap | ssl.provider = null 14:14:21 grafana | logger=sqlstore.transactions t=2024-04-09T14:11:56.095271542Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=2 code="database is locked" 14:14:21 kafka | [2024-04-09 14:12:25,577] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | ssl.secure.random.implementation = null 14:14:21 grafana | logger=sqlstore.transactions t=2024-04-09T14:11:56.115385603Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=3 code="database is locked" 14:14:21 kafka | [2024-04-09 14:12:25,578] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | ssl.trustmanager.algorithm = PKIX 14:14:21 grafana | logger=provisioning.dashboard t=2024-04-09T14:11:56.30089465Z level=info msg="finished to provision dashboards" 14:14:21 kafka | [2024-04-09 14:12:25,578] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) 14:14:21 policy-pap | ssl.truststore.certificates = null 14:14:21 grafana | logger=grafana-apiserver t=2024-04-09T14:11:56.383666809Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 14:14:21 kafka | [2024-04-09 14:12:25,578] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | ssl.truststore.location = null 14:14:21 grafana | logger=grafana-apiserver t=2024-04-09T14:11:56.384134958Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 14:14:21 kafka | [2024-04-09 14:12:25,578] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 policy-pap | ssl.truststore.password = null 14:14:21 grafana | logger=infra.usagestats t=2024-04-09T14:13:15.956671481Z level=info msg="Usage stats are ready to report" 14:14:21 kafka | [2024-04-09 14:12:25,584] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | ssl.truststore.type = JKS 14:14:21 kafka | [2024-04-09 14:12:25,584] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | transaction.timeout.ms = 60000 14:14:21 kafka | [2024-04-09 14:12:25,584] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 14:14:21 policy-pap | transactional.id = null 14:14:21 kafka | [2024-04-09 14:12:25,584] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 14:14:21 kafka | [2024-04-09 14:12:25,585] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 14:14:21 policy-pap | 14:14:21 kafka | [2024-04-09 14:12:25,590] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | [2024-04-09T14:12:24.336+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 14:14:21 kafka | [2024-04-09 14:12:25,590] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | [2024-04-09T14:12:24.352+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 14:14:21 kafka | [2024-04-09 14:12:25,590] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 14:14:21 policy-pap | [2024-04-09T14:12:24.352+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 14:14:21 kafka | [2024-04-09 14:12:25,591] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | [2024-04-09T14:12:24.352+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1712671944352 14:14:21 kafka | [2024-04-09 14:12:25,591] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 policy-pap | [2024-04-09T14:12:24.352+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=564a2e6e-474f-4e32-b0d5-9fb32de5e450, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 14:14:21 kafka | [2024-04-09 14:12:25,596] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | [2024-04-09T14:12:24.352+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=32922d40-9f92-4f75-b434-f52361ae7b3f, alive=false, publisher=null]]: starting 14:14:21 kafka | [2024-04-09 14:12:25,597] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | [2024-04-09T14:12:24.353+00:00|INFO|ProducerConfig|main] ProducerConfig values: 14:14:21 kafka | [2024-04-09 14:12:25,597] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 14:14:21 policy-pap | acks = -1 14:14:21 kafka | [2024-04-09 14:12:25,597] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | auto.include.jmx.reporter = true 14:14:21 kafka | [2024-04-09 14:12:25,597] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 14:14:21 policy-pap | batch.size = 16384 14:14:21 kafka | [2024-04-09 14:12:25,602] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | bootstrap.servers = [kafka:9092] 14:14:21 kafka | [2024-04-09 14:12:25,602] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | buffer.memory = 33554432 14:14:21 kafka | [2024-04-09 14:12:25,602] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 14:14:21 policy-pap | client.dns.lookup = use_all_dns_ips 14:14:21 kafka | [2024-04-09 14:12:25,602] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | client.id = producer-2 14:14:21 kafka | [2024-04-09 14:12:25,602] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 policy-pap | compression.type = none 14:14:21 kafka | [2024-04-09 14:12:25,607] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | connections.max.idle.ms = 540000 14:14:21 kafka | [2024-04-09 14:12:25,608] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | delivery.timeout.ms = 120000 14:14:21 policy-pap | enable.idempotence = true 14:14:21 kafka | [2024-04-09 14:12:25,608] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 14:14:21 policy-pap | interceptor.classes = [] 14:14:21 kafka | [2024-04-09 14:12:25,608] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 14:14:21 kafka | [2024-04-09 14:12:25,609] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 14:14:21 policy-pap | linger.ms = 0 14:14:21 kafka | [2024-04-09 14:12:25,613] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | max.block.ms = 60000 14:14:21 kafka | [2024-04-09 14:12:25,614] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | max.in.flight.requests.per.connection = 5 14:14:21 kafka | [2024-04-09 14:12:25,614] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 14:14:21 policy-pap | max.request.size = 1048576 14:14:21 kafka | [2024-04-09 14:12:25,614] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | metadata.max.age.ms = 300000 14:14:21 kafka | [2024-04-09 14:12:25,614] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 policy-pap | metadata.max.idle.ms = 300000 14:14:21 kafka | [2024-04-09 14:12:25,619] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | metric.reporters = [] 14:14:21 kafka | [2024-04-09 14:12:25,620] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | metrics.num.samples = 2 14:14:21 kafka | [2024-04-09 14:12:25,620] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 14:14:21 policy-pap | metrics.recording.level = INFO 14:14:21 kafka | [2024-04-09 14:12:25,620] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | metrics.sample.window.ms = 30000 14:14:21 kafka | [2024-04-09 14:12:25,620] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 14:14:21 policy-pap | partitioner.adaptive.partitioning.enable = true 14:14:21 kafka | [2024-04-09 14:12:25,629] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | partitioner.availability.timeout.ms = 0 14:14:21 kafka | [2024-04-09 14:12:25,630] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | partitioner.class = null 14:14:21 kafka | [2024-04-09 14:12:25,630] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 14:14:21 policy-pap | partitioner.ignore.keys = false 14:14:21 kafka | [2024-04-09 14:12:25,630] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 kafka | [2024-04-09 14:12:25,631] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 policy-pap | receive.buffer.bytes = 32768 14:14:21 kafka | [2024-04-09 14:12:25,638] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | reconnect.backoff.max.ms = 1000 14:14:21 kafka | [2024-04-09 14:12:25,638] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | reconnect.backoff.ms = 50 14:14:21 kafka | [2024-04-09 14:12:25,638] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 14:14:21 policy-pap | request.timeout.ms = 30000 14:14:21 kafka | [2024-04-09 14:12:25,638] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | retries = 2147483647 14:14:21 kafka | [2024-04-09 14:12:25,639] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 14:14:21 policy-pap | retry.backoff.ms = 100 14:14:21 kafka | [2024-04-09 14:12:25,644] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 14:14:21 policy-pap | sasl.client.callback.handler.class = null 14:14:21 kafka | [2024-04-09 14:12:25,645] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 14:14:21 policy-pap | sasl.jaas.config = null 14:14:21 kafka | [2024-04-09 14:12:25,645] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 14:14:21 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 14:14:21 kafka | [2024-04-09 14:12:25,645] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 14:14:21 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 14:14:21 kafka | [2024-04-09 14:12:25,646] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(JIxyITR5QGSmI5P2pGX22A) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 14:14:21 policy-pap | sasl.kerberos.service.name = null 14:14:21 kafka | [2024-04-09 14:12:25,666] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 14:14:21 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 14:14:21 kafka | [2024-04-09 14:12:25,666] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 14:14:21 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 14:14:21 kafka | [2024-04-09 14:12:25,666] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 14:14:21 policy-pap | sasl.login.callback.handler.class = null 14:14:21 kafka | [2024-04-09 14:12:25,666] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 14:14:21 policy-pap | sasl.login.class = null 14:14:21 kafka | [2024-04-09 14:12:25,666] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 14:14:21 policy-pap | sasl.login.connect.timeout.ms = null 14:14:21 kafka | [2024-04-09 14:12:25,666] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 14:14:21 policy-pap | sasl.login.read.timeout.ms = null 14:14:21 kafka | [2024-04-09 14:12:25,666] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 14:14:21 policy-pap | sasl.login.refresh.buffer.seconds = 300 14:14:21 policy-pap | sasl.login.refresh.min.period.seconds = 60 14:14:21 kafka | [2024-04-09 14:12:25,666] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 14:14:21 policy-pap | sasl.login.refresh.window.factor = 0.8 14:14:21 kafka | [2024-04-09 14:12:25,666] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 14:14:21 policy-pap | sasl.login.refresh.window.jitter = 0.05 14:14:21 kafka | [2024-04-09 14:12:25,666] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 14:14:21 policy-pap | sasl.login.retry.backoff.max.ms = 10000 14:14:21 kafka | [2024-04-09 14:12:25,666] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 14:14:21 policy-pap | sasl.login.retry.backoff.ms = 100 14:14:21 kafka | [2024-04-09 14:12:25,666] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 14:14:21 policy-pap | 
sasl.mechanism = GSSAPI 14:14:21 kafka | [2024-04-09 14:12:25,666] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 14:14:21 kafka | [2024-04-09 14:12:25,667] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.expected.audience = null 14:14:21 kafka | [2024-04-09 14:12:25,667] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.expected.issuer = null 14:14:21 kafka | [2024-04-09 14:12:25,667] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 14:14:21 kafka | [2024-04-09 14:12:25,667] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 14:14:21 kafka | [2024-04-09 14:12:25,667] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 14:14:21 kafka | [2024-04-09 14:12:25,667] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 14:14:21 kafka | [2024-04-09 14:12:25,667] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.scope.claim.name = scope 14:14:21 kafka | [2024-04-09 14:12:25,667] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.sub.claim.name = sub 14:14:21 kafka | [2024-04-09 14:12:25,667] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 14:14:21 policy-pap | sasl.oauthbearer.token.endpoint.url = null 14:14:21 kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 14:14:21 policy-pap | security.protocol = PLAINTEXT 14:14:21 kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 14:14:21 policy-pap | security.providers = null 14:14:21 kafka | [2024-04-09 
14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 14:14:21 policy-pap | send.buffer.bytes = 131072 14:14:21 kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 14:14:21 policy-pap | socket.connection.setup.timeout.max.ms = 30000 14:14:21 kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 14:14:21 policy-pap | socket.connection.setup.timeout.ms = 10000 14:14:21 kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 14:14:21 policy-pap | ssl.cipher.suites = null 14:14:21 kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 14:14:21 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 14:14:21 kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 14:14:21 policy-pap | ssl.endpoint.identification.algorithm = https 14:14:21 kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 14:14:21 policy-pap | ssl.engine.factory.class = null 14:14:21 kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 14:14:21 policy-pap | ssl.key.password = null 14:14:21 kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 14:14:21 policy-pap | ssl.keymanager.algorithm = SunX509 14:14:21 kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 14:14:21 policy-pap | ssl.keystore.certificate.chain = null 14:14:21 kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 14:14:21 policy-pap | ssl.keystore.key = null 14:14:21 kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 14:14:21 policy-pap | ssl.keystore.location = null 14:14:21 kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition 
__consumer_offsets-15 (state.change.logger) 14:14:21 policy-pap | ssl.keystore.password = null 14:14:21 kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 14:14:21 policy-pap | ssl.keystore.type = JKS 14:14:21 kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 14:14:21 policy-pap | ssl.protocol = TLSv1.3 14:14:21 kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 14:14:21 policy-pap | ssl.provider = null 14:14:21 kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 14:14:21 policy-pap | ssl.secure.random.implementation = null 14:14:21 kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 14:14:21 policy-pap | ssl.trustmanager.algorithm = PKIX 14:14:21 kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 14:14:21 policy-pap | ssl.truststore.certificates = null 14:14:21 kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 14:14:21 policy-pap | ssl.truststore.location = null 14:14:21 policy-pap | ssl.truststore.password = null 14:14:21 kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 14:14:21 policy-pap | ssl.truststore.type = JKS 14:14:21 kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 14:14:21 policy-pap | transaction.timeout.ms = 60000 14:14:21 kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 14:14:21 policy-pap | transactional.id = null 14:14:21 kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 14:14:21 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 14:14:21 kafka | [2024-04-09 14:12:25,668] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 14:14:21 policy-pap | 14:14:21 kafka | [2024-04-09 14:12:25,669] TRACE 
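The interleaved policy-pap lines above are the tail of the Kafka producer configuration that PAP dumps at startup: plaintext transport (security.protocol = PLAINTEXT), TLS and SASL settings left at their defaults (null keystores/truststores, JKS types), a 60 s transaction.timeout.ms, and a StringSerializer for values. A minimal sketch of a producer with the same effective settings, assuming the third-party kafka-python client (the build itself uses the Java client shown in these logs):

    import json
    from kafka import KafkaProducer  # assumption: kafka-python, not part of this build

    producer = KafkaProducer(
        bootstrap_servers="kafka:9092",                # broker address seen later in this log
        security_protocol="PLAINTEXT",                 # security.protocol = PLAINTEXT
        value_serializer=lambda s: s.encode("utf-8"),  # mirrors StringSerializer
    )
    # hypothetical payload; real PDP_UPDATE bodies appear further down in this log
    producer.send("policy-pdp-pap", json.dumps({"messageName": "PDP_UPDATE"}))
    producer.flush()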
[Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 14:14:21 policy-pap | [2024-04-09T14:12:24.354+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 14:14:21 kafka | [2024-04-09 14:12:25,669] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 14:14:21 policy-pap | [2024-04-09T14:12:24.356+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 14:14:21 kafka | [2024-04-09 14:12:25,675] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:24.356+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 14:14:21 kafka | [2024-04-09 14:12:25,677] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:24.356+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1712671944356 14:14:21 kafka | [2024-04-09 14:12:25,678] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:24.356+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=32922d40-9f92-4f75-b434-f52361ae7b3f, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 14:14:21 kafka | [2024-04-09 14:12:25,678] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:24.356+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 14:14:21 kafka | [2024-04-09 14:12:25,678] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:24.356+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 14:14:21 kafka | [2024-04-09 14:12:25,678] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:24.357+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 14:14:21 kafka | [2024-04-09 14:12:25,678] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:24.358+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 14:14:21 kafka | [2024-04-09 14:12:25,678] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:24.361+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 14:14:21 kafka | [2024-04-09 14:12:25,678] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | 
[2024-04-09T14:12:24.361+00:00|INFO|TimerManager|Thread-9] timer manager update started 14:14:21 kafka | [2024-04-09 14:12:25,678] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:24.363+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 14:14:21 kafka | [2024-04-09 14:12:25,679] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:24.363+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 14:14:21 kafka | [2024-04-09 14:12:25,679] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:24.363+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 14:14:21 kafka | [2024-04-09 14:12:25,679] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:24.364+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 14:14:21 kafka | [2024-04-09 14:12:25,679] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:24.365+00:00|INFO|ServiceManager|main] Policy PAP started 14:14:21 kafka | [2024-04-09 14:12:25,679] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:24.366+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.357 seconds (process running for 11.01) 14:14:21 kafka | [2024-04-09 14:12:25,679] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:24.782+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 14:14:21 kafka | [2024-04-09 14:12:25,679] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:24.784+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Cluster ID: TupwFhGQQjGmvCIddVeH4w 14:14:21 kafka | [2024-04-09 14:12:25,679] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:24.784+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: TupwFhGQQjGmvCIddVeH4w 14:14:21 kafka | [2024-04-09 14:12:25,679] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | 
[2024-04-09T14:12:24.785+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: TupwFhGQQjGmvCIddVeH4w 14:14:21 kafka | [2024-04-09 14:12:25,679] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:24.823+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 14:14:21 kafka | [2024-04-09 14:12:25,679] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:24.823+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: TupwFhGQQjGmvCIddVeH4w 14:14:21 kafka | [2024-04-09 14:12:25,679] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:24.893+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0 14:14:21 kafka | [2024-04-09 14:12:25,679] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:24.893+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 14:14:21 kafka | [2024-04-09 14:12:25,680] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:24.897+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 14:14:21 kafka | [2024-04-09 14:12:25,680] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:24.964+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 14:14:21 kafka | [2024-04-09 14:12:25,680] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:25.008+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 14:14:21 kafka | [2024-04-09 14:12:25,680] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:25.084+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer 
clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 14:14:21 kafka | [2024-04-09 14:12:25,680] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 kafka | [2024-04-09 14:12:25,680] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 kafka | [2024-04-09 14:12:25,680] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:25.116+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 14:14:21 kafka | [2024-04-09 14:12:25,680] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:25.189+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 14:14:21 kafka | [2024-04-09 14:12:25,680] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:25.235+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 14:14:21 kafka | [2024-04-09 14:12:25,680] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:25.294+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 14:14:21 kafka | [2024-04-09 14:12:25,680] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:25.341+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 14:14:21 policy-pap | [2024-04-09T14:12:25.411+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 14:14:21 policy-pap | [2024-04-09T14:12:25.450+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 14:14:21 kafka | 
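The UNKNOWN_TOPIC_OR_PARTITION and LEADER_NOT_AVAILABLE warnings above are benign start-up noise: the policy-pdp-pap topic has only just been created, so both consumers keep reissuing metadata requests (note the climbing correlation ids 2, 4, 6, 8, ...) until the broker finishes electing a leader for the partition. A sketch of the same wait-until-ready behaviour, again assuming kafka-python:

    import time
    from kafka import KafkaConsumer  # assumption: kafka-python

    consumer = KafkaConsumer(bootstrap_servers="kafka:9092")
    # topics() refreshes cluster metadata on each call, like the retries logged above
    while "policy-pdp-pap" not in consumer.topics():
        time.sleep(0.1)  # poll until the leader election visible above completes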
[2024-04-09 14:12:25,680] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:25.516+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 14:14:21 kafka | [2024-04-09 14:12:25,680] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:25.557+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 14:14:21 kafka | [2024-04-09 14:12:25,681] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:25.623+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 14:14:21 kafka | [2024-04-09 14:12:25,681] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:25.668+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 14:14:21 kafka | [2024-04-09 14:12:25,681] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:25.737+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 14:14:21 kafka | [2024-04-09 14:12:25,681] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:25.744+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 14:14:21 kafka | [2024-04-09 14:12:25,681] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:25.774+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 14:14:21 kafka | [2024-04-09 14:12:25,681] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:25.775+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, 
groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-cc6c70d5-9f7a-4a85-8bbc-f691b3908254
14:14:21 kafka | [2024-04-09 14:12:25,681] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:14:21 policy-pap | [2024-04-09T14:12:25.775+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException)
14:14:21 kafka | [2024-04-09 14:12:25,681] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:14:21 policy-pap | [2024-04-09T14:12:25.776+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group
14:14:21 kafka | [2024-04-09 14:12:25,681] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:14:21 policy-pap | [2024-04-09T14:12:25.776+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] (Re-)joining group
14:14:21 kafka | [2024-04-09 14:12:25,681] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:14:21 policy-pap | [2024-04-09T14:12:25.782+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Request joining group due to: need to re-join with the given member-id: consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3-937938bf-de0f-412f-b074-806532e1f873
14:14:21 kafka | [2024-04-09 14:12:25,681] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:14:21 policy-pap | [2024-04-09T14:12:25.782+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.'
(MemberIdRequiredException) 14:14:21 kafka | [2024-04-09 14:12:25,681] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:25.782+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] (Re-)joining group 14:14:21 kafka | [2024-04-09 14:12:25,682] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:28.810+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-cc6c70d5-9f7a-4a85-8bbc-f691b3908254', protocol='range'} 14:14:21 kafka | [2024-04-09 14:12:25,682] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:28.812+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Successfully joined group with generation Generation{generationId=1, memberId='consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3-937938bf-de0f-412f-b074-806532e1f873', protocol='range'} 14:14:21 kafka | [2024-04-09 14:12:25,682] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:28.819+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Finished assignment for group at generation 1: {consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3-937938bf-de0f-412f-b074-806532e1f873=Assignment(partitions=[policy-pdp-pap-0])} 14:14:21 kafka | [2024-04-09 14:12:25,682] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:28.819+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-cc6c70d5-9f7a-4a85-8bbc-f691b3908254=Assignment(partitions=[policy-pdp-pap-0])} 14:14:21 kafka | [2024-04-09 14:12:25,682] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:28.848+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-cc6c70d5-9f7a-4a85-8bbc-f691b3908254', protocol='range'} 14:14:21 kafka | [2024-04-09 14:12:25,682] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:28.849+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer 
clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 14:14:21 kafka | [2024-04-09 14:12:25,682] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:28.850+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Successfully synced group in generation Generation{generationId=1, memberId='consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3-937938bf-de0f-412f-b074-806532e1f873', protocol='range'} 14:14:21 kafka | [2024-04-09 14:12:25,682] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 kafka | [2024-04-09 14:12:25,682] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 kafka | [2024-04-09 14:12:25,682] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:28.850+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 14:14:21 kafka | [2024-04-09 14:12:25,682] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:28.853+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 14:14:21 kafka | [2024-04-09 14:12:25,682] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:28.854+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Adding newly assigned partitions: policy-pdp-pap-0 14:14:21 kafka | [2024-04-09 14:12:25,682] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:28.869+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Found no committed offset for partition policy-pdp-pap-0 14:14:21 kafka | [2024-04-09 14:12:25,682] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:28.871+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 14:14:21 kafka | [2024-04-09 14:12:25,683] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 
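The records above show the standard two-step group join completing: each consumer's first JoinGroup attempt is rejected with MemberIdRequiredException, the client rejoins with the member id the coordinator handed out, and generation 1 forms with the single partition policy-pdp-pap-0 assigned under the range protocol. Any group consumer triggers the same handshake transparently; a sketch, assuming kafka-python:

    import json
    from kafka import KafkaConsumer  # assumption: kafka-python

    consumer = KafkaConsumer(
        "policy-pdp-pap",
        bootstrap_servers="kafka:9092",
        group_id="policy-pap",                      # group name from the log
        value_deserializer=lambda b: json.loads(b),
    )
    # iteration blocks through the join/sync handshake, then yields records
    # from the assigned partition (policy-pdp-pap-0 here)
    for record in consumer:
        print(record.partition, record.value.get("messageName"))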
(kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:28.891+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 14:14:21 kafka | [2024-04-09 14:12:25,683] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:28.891+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3, groupId=8886bf5a-38da-4c7c-af7d-ca09814a22ad] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 14:14:21 kafka | [2024-04-09 14:12:25,683] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:30.263+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' 14:14:21 kafka | [2024-04-09 14:12:25,683] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:30.263+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' 14:14:21 kafka | [2024-04-09 14:12:25,683] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:30.264+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 1 ms 14:14:21 kafka | [2024-04-09 14:12:25,683] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.235+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: 14:14:21 kafka | [2024-04-09 14:12:25,683] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [] 14:14:21 kafka | [2024-04-09 14:12:25,683] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.236+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 14:14:21 kafka | [2024-04-09 14:12:25,683] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f71a83e0-8991-48b5-bf16-0f80efc2e25f","timestampMs":1712671966197,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup"} 14:14:21 kafka | [2024-04-09 14:12:25,683] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.236+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:14:21 kafka | [2024-04-09 14:12:25,683] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f71a83e0-8991-48b5-bf16-0f80efc2e25f","timestampMs":1712671966197,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup"} 14:14:21 kafka | [2024-04-09 14:12:25,683] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.246+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 14:14:21 kafka | [2024-04-09 14:12:25,684] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:46.336+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate starting 14:14:21 kafka | [2024-04-09 14:12:25,684] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.336+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate starting listener 14:14:21 kafka | [2024-04-09 14:12:25,684] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:46.336+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate starting timer 14:14:21 policy-pap | [2024-04-09T14:12:46.337+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=f35c2eaa-9447-4409-bc81-28e3583921e3, expireMs=1712671996337] 14:14:21 kafka | [2024-04-09 14:12:25,684] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.338+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=f35c2eaa-9447-4409-bc81-28e3583921e3, expireMs=1712671996337] 14:14:21 kafka | [2024-04-09 14:12:25,684] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:46.339+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate starting enqueue 14:14:21 kafka | [2024-04-09 14:12:25,684] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.339+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate started 14:14:21 kafka | [2024-04-09 14:12:25,684] INFO [GroupCoordinator 1]: Elected as the group coordinator for 
partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:46.340+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 14:14:21 kafka | [2024-04-09 14:12:25,684] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | {"source":"pap-d567b5c7-abc8-4867-b3e8-f75d8faeecf1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f35c2eaa-9447-4409-bc81-28e3583921e3","timestampMs":1712671966320,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:14:21 kafka | [2024-04-09 14:12:25,684] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:46.370+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 14:14:21 kafka | [2024-04-09 14:12:25,684] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | {"source":"pap-d567b5c7-abc8-4867-b3e8-f75d8faeecf1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f35c2eaa-9447-4409-bc81-28e3583921e3","timestampMs":1712671966320,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:14:21 kafka | [2024-04-09 14:12:25,684] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:46.370+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 14:14:21 kafka | [2024-04-09 14:12:25,684] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.379+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:14:21 kafka | [2024-04-09 14:12:25,685] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | {"source":"pap-d567b5c7-abc8-4867-b3e8-f75d8faeecf1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"f35c2eaa-9447-4409-bc81-28e3583921e3","timestampMs":1712671966320,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:14:21 kafka | [2024-04-09 14:12:25,685] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.380+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 14:14:21 kafka | [2024-04-09 14:12:25,685] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:46.393+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:14:21 kafka | [2024-04-09 14:12:25,685] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:14:21 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"58bba064-d794-42f0-bfa3-6b19bdabadb3","timestampMs":1712671966382,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:14:21 kafka | [2024-04-09 14:12:25,683] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 5 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager)
14:14:21 policy-pap | [2024-04-09T14:12:46.394+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus
14:14:21 kafka | [2024-04-09 14:12:25,685] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:14:21 policy-pap | [2024-04-09T14:12:46.394+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap]
14:14:21 kafka | [2024-04-09 14:12:25,685] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager)
14:14:21 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f35c2eaa-9447-4409-bc81-28e3583921e3","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"20e7f5d3-bd9e-4c55-ba54-348ec7aba681","timestampMs":1712671966382,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"}
14:14:21 kafka | [2024-04-09 14:12:25,685] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator)
14:14:21 policy-pap | [2024-04-09T14:12:46.395+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat]
14:14:21 kafka | [2024-04-09 14:12:25,685] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler.
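The JSON payloads above are the PDP protocol in flight: PDP_STATUS heartbeats plus a status response whose response.responseTo echoes the requestId of the earlier PDP_UPDATE (f35c2eaa-9447-4409-bc81-28e3583921e3), which is how PAP matches a response to its pending request and cancels the request's timer. A stdlib-only illustration using the exact payload from the log:

    import json
    from datetime import datetime, timezone

    status = json.loads("""{"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY",
      "description":"Pdp status response message for PdpUpdate","policies":[],
      "response":{"responseTo":"f35c2eaa-9447-4409-bc81-28e3583921e3",
                  "responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},
      "messageName":"PDP_STATUS","requestId":"20e7f5d3-bd9e-4c55-ba54-348ec7aba681",
      "timestampMs":1712671966382,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b",
      "pdpGroup":"defaultGroup","pdpSubgroup":"apex"}""")

    sent = datetime.fromtimestamp(status["timestampMs"] / 1000, tz=timezone.utc)
    print(sent.isoformat())                  # 2024-04-09 14:12:46.382 UTC
    print(status["response"]["responseTo"])  # the PDP_UPDATE requestId it answers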
(kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"58bba064-d794-42f0-bfa3-6b19bdabadb3","timestampMs":1712671966382,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:14:21 kafka | [2024-04-09 14:12:25,686] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.395+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate stopping 14:14:21 kafka | [2024-04-09 14:12:25,686] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:46.395+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate stopping enqueue 14:14:21 kafka | [2024-04-09 14:12:25,686] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.395+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate stopping timer 14:14:21 kafka | [2024-04-09 14:12:25,686] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 kafka | [2024-04-09 14:12:25,686] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:46.395+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=f35c2eaa-9447-4409-bc81-28e3583921e3, expireMs=1712671996337] 14:14:21 kafka | [2024-04-09 14:12:25,686] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.396+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate stopping listener 14:14:21 kafka | [2024-04-09 14:12:25,686] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.396+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate stopped 14:14:21 kafka | [2024-04-09 14:12:25,686] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:46.402+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate successful 14:14:21 kafka | [2024-04-09 14:12:25,686] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.402+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b start publishing next request 14:14:21 kafka | [2024-04-09 14:12:25,686] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.402+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpStateChange starting 14:14:21 kafka | [2024-04-09 14:12:25,686] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.402+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpStateChange starting listener 14:14:21 kafka | [2024-04-09 14:12:25,686] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.403+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpStateChange starting timer 14:14:21 kafka | [2024-04-09 14:12:25,686] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.403+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=cb095d6b-1806-48f9-af91-a9c5f08d2e3b, expireMs=1712671996403] 14:14:21 kafka | [2024-04-09 14:12:25,687] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.403+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpStateChange starting enqueue 14:14:21 kafka | [2024-04-09 14:12:25,687] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.403+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=cb095d6b-1806-48f9-af91-a9c5f08d2e3b, expireMs=1712671996403] 14:14:21 kafka | [2024-04-09 14:12:25,687] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.403+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpStateChange started 14:14:21 kafka | [2024-04-09 14:12:25,687] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.403+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 14:14:21 kafka | [2024-04-09 14:12:25,687] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | {"source":"pap-d567b5c7-abc8-4867-b3e8-f75d8faeecf1","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"cb095d6b-1806-48f9-af91-a9c5f08d2e3b","timestampMs":1712671966320,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:14:21 policy-pap | [2024-04-09T14:12:46.438+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:14:21 kafka | [2024-04-09 14:12:25,687] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | {"source":"pap-d567b5c7-abc8-4867-b3e8-f75d8faeecf1","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"cb095d6b-1806-48f9-af91-a9c5f08d2e3b","timestampMs":1712671966320,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:14:21 kafka | [2024-04-09 14:12:25,687] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.438+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 14:14:21 kafka | [2024-04-09 14:12:25,688] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 8 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.441+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:14:21 kafka | [2024-04-09 14:12:25,688] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 14:14:21 kafka | [2024-04-09 14:12:25,688] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"cb095d6b-1806-48f9-af91-a9c5f08d2e3b","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"d74c0d11-396a-45ee-aca8-c28faceef757","timestampMs":1712671966414,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:14:21 kafka | [2024-04-09 14:12:25,688] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.453+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 14:14:21 kafka | [2024-04-09 14:12:25,688] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"f35c2eaa-9447-4409-bc81-28e3583921e3","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"20e7f5d3-bd9e-4c55-ba54-348ec7aba681","timestampMs":1712671966382,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:14:21 kafka | [2024-04-09 14:12:25,688] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.453+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpStateChange stopping 14:14:21 kafka | [2024-04-09 14:12:25,688] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.453+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpStateChange stopping enqueue 14:14:21 kafka | [2024-04-09 14:12:25,688] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.453+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpStateChange stopping timer 14:14:21 kafka | [2024-04-09 14:12:25,689] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 8 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.453+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=cb095d6b-1806-48f9-af91-a9c5f08d2e3b, expireMs=1712671996403] 14:14:21 kafka | [2024-04-09 14:12:25,689] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.453+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpStateChange stopping listener 14:14:21 kafka | [2024-04-09 14:12:25,689] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.453+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpStateChange stopped 14:14:21 kafka | [2024-04-09 14:12:25,689] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.453+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpStateChange successful 14:14:21 kafka | [2024-04-09 14:12:25,689] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.453+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b start publishing next request 14:14:21 kafka | [2024-04-09 14:12:25,689] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.453+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate starting 14:14:21 kafka | [2024-04-09 14:12:25,689] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.453+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate starting listener 14:14:21 kafka | [2024-04-09 14:12:25,689] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.454+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate starting timer 14:14:21 kafka | [2024-04-09 14:12:25,690] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.454+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=6626c05b-9878-4bec-8cb9-fdf1ff33442a, expireMs=1712671996454] 14:14:21 kafka | [2024-04-09 14:12:25,690] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.454+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate starting enqueue 14:14:21 kafka | [2024-04-09 14:12:25,690] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.454+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate started 14:14:21 kafka | [2024-04-09 14:12:25,690] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.454+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 14:14:21 kafka | [2024-04-09 14:12:25,690] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | {"source":"pap-d567b5c7-abc8-4867-b3e8-f75d8faeecf1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6626c05b-9878-4bec-8cb9-fdf1ff33442a","timestampMs":1712671966430,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:14:21 kafka | [2024-04-09 14:12:25,690] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.454+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id f35c2eaa-9447-4409-bc81-28e3583921e3 14:14:21 kafka | [2024-04-09 14:12:25,690] INFO [Broker id=1] Finished LeaderAndIsr request in 674ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) 14:14:21 policy-pap | [2024-04-09T14:12:46.462+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 14:14:21 kafka | [2024-04-09 14:12:25,690] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | {"source":"pap-d567b5c7-abc8-4867-b3e8-f75d8faeecf1","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"cb095d6b-1806-48f9-af91-a9c5f08d2e3b","timestampMs":1712671966320,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:14:21 kafka | [2024-04-09 14:12:25,691] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.464+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 14:14:21 kafka | [2024-04-09 14:12:25,691] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | [2024-04-09T14:12:46.466+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:14:21 kafka | [2024-04-09 14:12:25,691] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 policy-pap | {"source":"pap-d567b5c7-abc8-4867-b3e8-f75d8faeecf1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6626c05b-9878-4bec-8cb9-fdf1ff33442a","timestampMs":1712671966430,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:14:21 kafka | [2024-04-09 14:12:25,691] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 kafka | [2024-04-09 14:12:25,691] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 7 milliseconds for epoch 0, of which 7 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 kafka | [2024-04-09 14:12:25,691] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. 
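Note on the exchange above: the policy-pap entries interleaved with the broker output trace a single PAP-to-PDP round trip. PAP publishes a PDP_STATE_CHANGE on the policy-pdp-pap topic, the apex PDP replies with a PDP_STATUS ("State changed to active. No policies found."), and PAP then starts a PDP_UPDATE with empty policiesToBeDeployed/policiesToBeUndeployed lists, registering a fresh timer for the new request id. A minimal sketch for observing this traffic directly, assuming the stock Kafka console tools are available in the broker container and using the kafka:9092 listener from this run:

  # watch the PAP<->PDP request/response topic from the start of the test
  kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic policy-pdp-pap --from-beginning

  # the same PDP_STATUS messages also arrive on the heartbeat topic ([IN|KAFKA|policy-heartbeat] above)
  kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic policy-heartbeat --from-beginning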
(kafka.coordinator.group.GroupMetadataManager) 14:14:21 kafka | [2024-04-09 14:12:25,691] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 kafka | [2024-04-09 14:12:25,692] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 kafka | [2024-04-09 14:12:25,692] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 kafka | [2024-04-09 14:12:25,692] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 kafka | [2024-04-09 14:12:25,692] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 kafka | [2024-04-09 14:12:25,692] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 6 milliseconds for epoch 0, of which 6 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 14:14:21 kafka | [2024-04-09 14:12:25,694] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=JIxyITR5QGSmI5P2pGX22A, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=ITmYpZ6rSK-iF5o_1J2T3Q, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent 
by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with 
correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,701] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,702] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,703] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 14:14:21 kafka | [2024-04-09 14:12:25,766] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-cc6c70d5-9f7a-4a85-8bbc-f691b3908254 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 14:14:21 kafka | [2024-04-09 14:12:25,779] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 8886bf5a-38da-4c7c-af7d-ca09814a22ad in Empty state. Created a new member id consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3-937938bf-de0f-412f-b074-806532e1f873 and request the member to rejoin with this id. 
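The long run of TRACE lines above is the controller-to-broker metadata handshake: the LeaderAndIsr response reports error-free leadership for all 51 partitions (50 for __consumer_offsets plus policy-pdp-pap-0), and the broker then caches one UpdateMetadataPartitionState entry per partition, each with leader=1, isr=[1], replicas=[1] on this single-broker compose setup. A quick way to confirm the resulting partition layout, assuming the standard Kafka CLI inside the broker container:

  kafka-topics.sh --bootstrap-server kafka:9092 --describe --topic policy-pdp-pap
  kafka-topics.sh --bootstrap-server kafka:9092 --describe --topic __consumer_offsets | head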
(kafka.coordinator.group.GroupCoordinator) 14:14:21 kafka | [2024-04-09 14:12:25,794] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-cc6c70d5-9f7a-4a85-8bbc-f691b3908254 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 14:14:21 kafka | [2024-04-09 14:12:25,794] INFO [GroupCoordinator 1]: Preparing to rebalance group 8886bf5a-38da-4c7c-af7d-ca09814a22ad in state PreparingRebalance with old generation 0 (__consumer_offsets-23) (reason: Adding new member consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3-937938bf-de0f-412f-b074-806532e1f873 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 14:14:21 kafka | [2024-04-09 14:12:26,530] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 5bf355d1-b191-4690-8ff2-dd6842394381 in Empty state. Created a new member id consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2-780b758d-7817-467f-b505-47072bd7ea3f and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 14:14:21 kafka | [2024-04-09 14:12:26,534] INFO [GroupCoordinator 1]: Preparing to rebalance group 5bf355d1-b191-4690-8ff2-dd6842394381 in state PreparingRebalance with old generation 0 (__consumer_offsets-27) (reason: Adding new member consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2-780b758d-7817-467f-b505-47072bd7ea3f with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 14:14:21 kafka | [2024-04-09 14:12:28,808] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 14:14:21 kafka | [2024-04-09 14:12:28,811] INFO [GroupCoordinator 1]: Stabilized group 8886bf5a-38da-4c7c-af7d-ca09814a22ad generation 1 (__consumer_offsets-23) with 1 members (kafka.coordinator.group.GroupCoordinator) 14:14:21 kafka | [2024-04-09 14:12:28,828] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-cc6c70d5-9f7a-4a85-8bbc-f691b3908254 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 14:14:21 kafka | [2024-04-09 14:12:28,830] INFO [GroupCoordinator 1]: Assignment received from leader consumer-8886bf5a-38da-4c7c-af7d-ca09814a22ad-3-937938bf-de0f-412f-b074-806532e1f873 for group 8886bf5a-38da-4c7c-af7d-ca09814a22ad for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 14:14:21 kafka | [2024-04-09 14:12:29,534] INFO [GroupCoordinator 1]: Stabilized group 5bf355d1-b191-4690-8ff2-dd6842394381 generation 1 (__consumer_offsets-27) with 1 members (kafka.coordinator.group.GroupCoordinator) 14:14:21 kafka | [2024-04-09 14:12:29,547] INFO [GroupCoordinator 1]: Assignment received from leader consumer-5bf355d1-b191-4690-8ff2-dd6842394381-2-780b758d-7817-467f-b505-47072bd7ea3f for group 5bf355d1-b191-4690-8ff2-dd6842394381 for generation 1. The group has 1 members, 0 of which are static. 
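The GroupCoordinator sequence above is the normal two-step join for dynamic members, not a failure: a consumer arriving with an unknown member id is told to rejoin with a server-assigned id (the "rebalance failed due to MemberIdRequiredException" reason string), the group passes through PreparingRebalance, stabilizes at generation 1, and the leader's partition assignment is accepted. A sketch for inspecting the groups this log shows (policy-pap plus the two UUID-named groups), again assuming standard Kafka tooling on the broker:

  kafka-consumer-groups.sh --bootstrap-server kafka:9092 --list
  kafka-consumer-groups.sh --bootstrap-server kafka:9092 --describe --group policy-pap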
(kafka.coordinator.group.GroupCoordinator) 14:14:21 policy-pap | [2024-04-09T14:12:46.466+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 14:14:21 policy-pap | [2024-04-09T14:12:46.474+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 14:14:21 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"cb095d6b-1806-48f9-af91-a9c5f08d2e3b","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"d74c0d11-396a-45ee-aca8-c28faceef757","timestampMs":1712671966414,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:14:21 policy-pap | [2024-04-09T14:12:46.474+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id cb095d6b-1806-48f9-af91-a9c5f08d2e3b 14:14:21 policy-pap | [2024-04-09T14:12:46.477+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 14:14:21 policy-pap | {"source":"pap-d567b5c7-abc8-4867-b3e8-f75d8faeecf1","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"6626c05b-9878-4bec-8cb9-fdf1ff33442a","timestampMs":1712671966430,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:14:21 policy-pap | [2024-04-09T14:12:46.478+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 14:14:21 policy-pap | [2024-04-09T14:12:46.480+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 14:14:21 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"6626c05b-9878-4bec-8cb9-fdf1ff33442a","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"b2ae2d05-b707-414a-b0f7-ae1aec005c8b","timestampMs":1712671966465,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:14:21 policy-pap | [2024-04-09T14:12:46.481+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 14:14:21 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"6626c05b-9878-4bec-8cb9-fdf1ff33442a","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"b2ae2d05-b707-414a-b0f7-ae1aec005c8b","timestampMs":1712671966465,"name":"apex-87d34be7-6039-47df-ad80-62271f3f875b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 14:14:21 policy-pap | [2024-04-09T14:12:46.481+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate stopping 14:14:21 policy-pap | [2024-04-09T14:12:46.481+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate stopping enqueue 14:14:21 policy-pap | [2024-04-09T14:12:46.481+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate stopping timer 14:14:21 policy-pap | [2024-04-09T14:12:46.481+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=6626c05b-9878-4bec-8cb9-fdf1ff33442a, 
expireMs=1712671996454] 14:14:21 policy-pap | [2024-04-09T14:12:46.481+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate stopping listener 14:14:21 policy-pap | [2024-04-09T14:12:46.481+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate stopped 14:14:21 policy-pap | [2024-04-09T14:12:46.481+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 6626c05b-9878-4bec-8cb9-fdf1ff33442a 14:14:21 policy-pap | [2024-04-09T14:12:46.486+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b PdpUpdate successful 14:14:21 policy-pap | [2024-04-09T14:12:46.486+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-87d34be7-6039-47df-ad80-62271f3f875b has no more requests 14:14:21 policy-pap | [2024-04-09T14:12:50.887+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 14:14:21 policy-pap | [2024-04-09T14:12:50.894+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 14:14:21 policy-pap | [2024-04-09T14:12:51.294+00:00|INFO|SessionData|http-nio-6969-exec-6] unknown group testGroup 14:14:21 policy-pap | [2024-04-09T14:12:51.825+00:00|INFO|SessionData|http-nio-6969-exec-6] create cached group testGroup 14:14:21 policy-pap | [2024-04-09T14:12:51.826+00:00|INFO|SessionData|http-nio-6969-exec-6] creating DB group testGroup 14:14:21 policy-pap | [2024-04-09T14:12:52.354+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup 14:14:21 policy-pap | [2024-04-09T14:12:52.562+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy onap.restart.tca 1.0.0 14:14:21 policy-pap | [2024-04-09T14:12:52.650+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 14:14:21 policy-pap | [2024-04-09T14:12:52.650+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group testGroup 14:14:21 policy-pap | [2024-04-09T14:12:52.650+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group testGroup 14:14:21 policy-pap | [2024-04-09T14:12:52.664+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-04-09T14:12:52Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-04-09T14:12:52Z, user=policyadmin)] 14:14:21 policy-pap | [2024-04-09T14:12:53.394+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup 14:14:21 policy-pap | [2024-04-09T14:12:53.395+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 14:14:21 policy-pap | [2024-04-09T14:12:53.395+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy onap.restart.tca 1.0.0 14:14:21 policy-pap | [2024-04-09T14:12:53.396+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup 14:14:21 policy-pap | [2024-04-09T14:12:53.396+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup 14:14:21 policy-pap | [2024-04-09T14:12:53.409+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, 
action=UNDEPLOYMENT, timestamp=2024-04-09T14:12:53Z, user=policyadmin)] 14:14:21 policy-pap | [2024-04-09T14:12:53.714+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group defaultGroup 14:14:21 policy-pap | [2024-04-09T14:12:53.714+00:00|INFO|SessionData|http-nio-6969-exec-7] cache group testGroup 14:14:21 policy-pap | [2024-04-09T14:12:53.714+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-7] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 14:14:21 policy-pap | [2024-04-09T14:12:53.714+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 14:14:21 policy-pap | [2024-04-09T14:12:53.715+00:00|INFO|SessionData|http-nio-6969-exec-7] update cached group testGroup 14:14:21 policy-pap | [2024-04-09T14:12:53.715+00:00|INFO|SessionData|http-nio-6969-exec-7] updating DB group testGroup 14:14:21 policy-pap | [2024-04-09T14:12:53.727+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-7] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-09T14:12:53Z, user=policyadmin)] 14:14:21 policy-pap | [2024-04-09T14:13:14.318+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup 14:14:21 policy-pap | [2024-04-09T14:13:14.320+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup 14:14:21 policy-pap | [2024-04-09T14:13:16.338+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=f35c2eaa-9447-4409-bc81-28e3583921e3, expireMs=1712671996337] 14:14:21 policy-pap | [2024-04-09T14:13:16.403+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=cb095d6b-1806-48f9-af91-a9c5f08d2e3b, expireMs=1712671996403] 14:14:21 ++ echo 'Tearing down containers...' 14:14:21 Tearing down containers... 14:14:21 ++ docker-compose down -v --remove-orphans 14:14:22 Stopping policy-apex-pdp ... 14:14:22 Stopping policy-pap ... 14:14:22 Stopping kafka ... 14:14:22 Stopping grafana ... 14:14:22 Stopping policy-api ... 14:14:22 Stopping compose_zookeeper_1 ... 14:14:22 Stopping mariadb ... 14:14:22 Stopping simulator ... 14:14:22 Stopping prometheus ... 14:14:23 Stopping grafana ... done 14:14:23 Stopping prometheus ... done 14:14:32 Stopping policy-apex-pdp ... done 14:14:43 Stopping simulator ... done 14:14:43 Stopping policy-pap ... done 14:14:43 Stopping mariadb ... done 14:14:44 Stopping kafka ... done 14:14:44 Stopping compose_zookeeper_1 ... done 14:14:53 Stopping policy-api ... done 14:14:53 Removing policy-apex-pdp ... 14:14:53 Removing policy-pap ... 14:14:53 Removing kafka ... 14:14:53 Removing grafana ... 14:14:53 Removing policy-api ... 14:14:53 Removing policy-db-migrator ... 14:14:53 Removing compose_zookeeper_1 ... 14:14:53 Removing mariadb ... 14:14:53 Removing simulator ... 14:14:53 Removing prometheus ... 14:14:53 Removing compose_zookeeper_1 ... done 14:14:53 Removing simulator ... done 14:14:53 Removing grafana ... done 14:14:53 Removing mariadb ... done 14:14:53 Removing prometheus ... done 14:14:53 Removing policy-apex-pdp ... done 14:14:53 Removing policy-pap ... done 14:14:53 Removing policy-api ... done 14:14:53 Removing policy-db-migrator ... done 14:14:53 Removing kafka ... 
done 14:14:53 Removing network compose_default 14:14:53 ++ cd /w/workspace/policy-pap-master-project-csit-pap 14:14:53 + load_set 14:14:53 + _setopts=hxB 14:14:53 ++ echo braceexpand:hashall:interactive-comments:xtrace 14:14:53 ++ tr : ' ' 14:14:53 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 14:14:53 + set +o braceexpand 14:14:53 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 14:14:53 + set +o hashall 14:14:53 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 14:14:53 + set +o interactive-comments 14:14:53 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 14:14:53 + set +o xtrace 14:14:53 ++ echo hxB 14:14:53 ++ sed 's/./& /g' 14:14:53 + for i in $(echo "$_setopts" | sed 's/./& /g') 14:14:53 + set +h 14:14:53 + for i in $(echo "$_setopts" | sed 's/./& /g') 14:14:53 + set +x 14:14:53 + [[ -n /tmp/tmp.6QreRUgV9i ]] 14:14:53 + rsync -av /tmp/tmp.6QreRUgV9i/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 14:14:53 sending incremental file list 14:14:53 ./ 14:14:53 log.html 14:14:53 output.xml 14:14:53 report.html 14:14:53 testplan.txt 14:14:53 14:14:53 sent 919,740 bytes received 95 bytes 1,839,670.00 bytes/sec 14:14:53 total size is 919,195 speedup is 1.00 14:14:53 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models 14:14:53 + exit 0 14:14:53 $ ssh-agent -k 14:14:53 unset SSH_AUTH_SOCK; 14:14:53 unset SSH_AGENT_PID; 14:14:53 echo Agent pid 2081 killed; 14:14:53 [ssh-agent] Stopped. 14:14:53 Robot results publisher started... 14:14:53 INFO: Checking test criticality is deprecated and will be dropped in a future release! 14:14:53 -Parsing output xml: 14:14:54 Done! 14:14:54 WARNING! Could not find file: **/log.html 14:14:54 WARNING! Could not find file: **/report.html 14:14:54 -Copying log files to build dir: 14:14:54 Done! 14:14:54 -Assigning results to build: 14:14:54 Done! 14:14:54 -Checking thresholds: 14:14:54 Done! 14:14:54 Done publishing Robot results. 14:14:54 [PostBuildScript] - [INFO] Executing post build scripts. 
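The load_set trace above restores the shell options that were saved before the verbose section: the saved short-option string (hxB here) and the colon-separated SHELLOPTS list are walked entry by entry, and each flag is switched off with set +<flag> / set +o <name>, which is why xtrace output stops right after "set +x". A self-contained sketch of the same idiom (illustrative names, not the exact Jenkins helper):

  # snapshot the current short options, enable tracing for a noisy section,
  # then switch every snapshotted option back off afterwards
  _setopts="$-"                                # e.g. "hxB" on this builder
  set -x                                       # ...traced work happens here...
  for i in $(echo "$_setopts" | sed 's/./& /g'); do
    set "+$i"
  done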
14:14:54 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins13453795783583277534.sh 14:14:54 ---> sysstat.sh 14:14:55 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins8644787431641409023.sh 14:14:55 ---> package-listing.sh 14:14:55 ++ tr '[:upper:]' '[:lower:]' 14:14:55 ++ facter osfamily 14:14:55 + OS_FAMILY=debian 14:14:55 + workspace=/w/workspace/policy-pap-master-project-csit-pap 14:14:55 + START_PACKAGES=/tmp/packages_start.txt 14:14:55 + END_PACKAGES=/tmp/packages_end.txt 14:14:55 + DIFF_PACKAGES=/tmp/packages_diff.txt 14:14:55 + PACKAGES=/tmp/packages_start.txt 14:14:55 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 14:14:55 + PACKAGES=/tmp/packages_end.txt 14:14:55 + case "${OS_FAMILY}" in 14:14:55 + dpkg -l 14:14:55 + grep '^ii' 14:14:55 + '[' -f /tmp/packages_start.txt ']' 14:14:55 + '[' -f /tmp/packages_end.txt ']' 14:14:55 + diff /tmp/packages_start.txt /tmp/packages_end.txt 14:14:55 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 14:14:55 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/ 14:14:55 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/ 14:14:55 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins7656286616039403647.sh 14:14:55 ---> capture-instance-metadata.sh 14:14:55 Setup pyenv: 14:14:55 system 14:14:55 3.8.13 14:14:55 3.9.13 14:14:55 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 14:14:55 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-3z0W from file:/tmp/.os_lf_venv 14:14:57 lf-activate-venv(): INFO: Installing: lftools 14:15:06 lf-activate-venv(): INFO: Adding /tmp/venv-3z0W/bin to PATH 14:15:06 INFO: Running in OpenStack, capturing instance metadata 14:15:07 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins875017452155230249.sh 14:15:07 provisioning config files... 14:15:07 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config10716431719136852717tmp 14:15:07 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] 14:15:07 Run condition [Regular expression match] preventing perform for step [Provide Configuration files] 14:15:07 [EnvInject] - Injecting environment variables from a build step. 14:15:07 [EnvInject] - Injecting as environment variables the properties content 14:15:07 SERVER_ID=logs 14:15:07 14:15:07 [EnvInject] - Variables injected successfully. 14:15:07 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins12302234228367470158.sh 14:15:07 ---> create-netrc.sh 14:15:07 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1361430092756309133.sh 14:15:07 ---> python-tools-install.sh 14:15:07 Setup pyenv: 14:15:07 system 14:15:07 3.8.13 14:15:07 3.9.13 14:15:07 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 14:15:07 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-3z0W from file:/tmp/.os_lf_venv 14:15:08 lf-activate-venv(): INFO: Installing: lftools 14:15:17 lf-activate-venv(): INFO: Adding /tmp/venv-3z0W/bin to PATH 14:15:17 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins6054504162437819749.sh 14:15:17 ---> sudo-logs.sh 14:15:17 Archiving 'sudo' log.. 
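package-listing.sh, traced above, is a simple package-drift check: the set of installed packages is captured at job start and again at job end, and the two lists are diffed into the packages_diff.txt that gets copied to the archives directory. A minimal sketch of the same idea on a Debian-family host:

  # record installed packages before and after the build, then diff
  dpkg -l | grep '^ii' > /tmp/packages_start.txt     # taken at job start
  # ... job runs ...
  dpkg -l | grep '^ii' > /tmp/packages_end.txt
  diff /tmp/packages_start.txt /tmp/packages_end.txt > /tmp/packages_diff.txt || true   # diff exits 1 when packages changed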
14:15:17 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins5241280818783578738.sh
14:15:17 ---> job-cost.sh
14:15:17 Setup pyenv:
14:15:17   system
14:15:17   3.8.13
14:15:17   3.9.13
14:15:17 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
14:15:18 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-3z0W from file:/tmp/.os_lf_venv
14:15:19 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
14:15:24 lf-activate-venv(): INFO: Adding /tmp/venv-3z0W/bin to PATH
14:15:24 INFO: No Stack...
14:15:24 INFO: Retrieving Pricing Info for: v3-standard-8
14:15:24 INFO: Archiving Costs
14:15:24 [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins11014231714605064695.sh
14:15:24 ---> logs-deploy.sh
14:15:24 Setup pyenv:
14:15:24   system
14:15:24   3.8.13
14:15:24   3.9.13
14:15:24 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
14:15:24 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-3z0W from file:/tmp/.os_lf_venv
14:15:26 lf-activate-venv(): INFO: Installing: lftools
14:15:34 lf-activate-venv(): INFO: Adding /tmp/venv-3z0W/bin to PATH
14:15:34 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1638
14:15:34 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
14:15:35 Archives upload complete.
14:15:35 INFO: archiving logs to Nexus
14:15:36 ---> uname -a:
14:15:36 Linux prd-ubuntu1804-docker-8c-8g-21829 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
14:15:36
14:15:36
14:15:36 ---> lscpu:
14:15:36 Architecture:          x86_64
14:15:36 CPU op-mode(s):        32-bit, 64-bit
14:15:36 Byte Order:            Little Endian
14:15:36 CPU(s):                8
14:15:36 On-line CPU(s) list:   0-7
14:15:36 Thread(s) per core:    1
14:15:36 Core(s) per socket:    1
14:15:36 Socket(s):             8
14:15:36 NUMA node(s):          1
14:15:36 Vendor ID:             AuthenticAMD
14:15:36 CPU family:            23
14:15:36 Model:                 49
14:15:36 Model name:            AMD EPYC-Rome Processor
14:15:36 Stepping:              0
14:15:36 CPU MHz:               2800.000
14:15:36 BogoMIPS:              5600.00
14:15:36 Virtualization:        AMD-V
14:15:36 Hypervisor vendor:     KVM
14:15:36 Virtualization type:   full
14:15:36 L1d cache:             32K
14:15:36 L1i cache:             32K
14:15:36 L2 cache:              512K
14:15:36 L3 cache:              16384K
14:15:36 NUMA node0 CPU(s):     0-7
14:15:36 Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
14:15:36
14:15:36
14:15:36 ---> nproc:
14:15:36 8
14:15:36
14:15:36
14:15:36 ---> df -h:
14:15:36 Filesystem      Size  Used Avail Use% Mounted on
14:15:36 udev             16G     0   16G   0% /dev
14:15:36 tmpfs           3.2G  708K  3.2G   1% /run
14:15:36 /dev/vda1       155G   14G  142G   9% /
14:15:36 tmpfs            16G     0   16G   0% /dev/shm
14:15:36 tmpfs           5.0M     0  5.0M   0% /run/lock
14:15:36 tmpfs            16G     0   16G   0% /sys/fs/cgroup
14:15:36 /dev/vda15      105M  4.4M  100M   5% /boot/efi
14:15:36 tmpfs           3.2G     0  3.2G   0% /run/user/1001
14:15:36
14:15:36
14:15:36 ---> free -m:
14:15:36               total        used        free      shared  buff/cache   available
14:15:36 Mem:          32167         833       25115           0        6218       30877
14:15:36 Swap:          1023           0        1023
14:15:36
14:15:36
14:15:36 ---> ip addr:
14:15:36 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
14:15:36     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
14:15:36     inet 127.0.0.1/8 scope host lo
14:15:36        valid_lft forever preferred_lft forever
14:15:36     inet6 ::1/128 scope host
14:15:36        valid_lft forever preferred_lft forever
14:15:36 2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1458 qdisc mq state UP group default qlen 1000
14:15:36     link/ether fa:16:3e:31:73:a4 brd ff:ff:ff:ff:ff:ff
14:15:36     inet 10.30.107.36/23 brd 10.30.107.255 scope global dynamic ens3
14:15:36        valid_lft 85907sec preferred_lft 85907sec
14:15:36     inet6 fe80::f816:3eff:fe31:73a4/64 scope link
14:15:36        valid_lft forever preferred_lft forever
14:15:36 3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
14:15:36     link/ether 02:42:8e:78:67:97 brd ff:ff:ff:ff:ff:ff
14:15:36     inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
14:15:36        valid_lft forever preferred_lft forever
14:15:36
14:15:36
14:15:36 ---> sar -b -r -n DEV:
14:15:36 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21829)  04/09/24  _x86_64_  (8 CPU)
14:15:36
14:15:36 14:07:25     LINUX RESTART  (8 CPU)
14:15:36
14:15:36 14:08:02        tps      rtps      wtps   bread/s   bwrtn/s
14:15:36 14:09:01     118.79     45.38     73.41   2031.66  26365.57
14:15:36 14:10:01     106.28     13.86     92.42   1126.21  28830.26
14:15:36 14:11:01     105.57      9.55     96.02   1688.52  41283.12
14:15:36 14:12:01     464.41     11.68    452.72    775.54 134189.72
14:15:36 14:13:01      30.51      0.38     30.13     31.46  23144.83
14:15:36 14:14:01      16.25      0.00     16.25      0.00  18910.78
14:15:36 14:15:01      64.32      0.88     63.44     45.33  21415.90
14:15:36 Average:     129.47     11.60    117.88    811.20  42057.38
14:15:36
14:15:36 14:08:02 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
14:15:36 14:09:01 30168756 31731024 2770464 8.41 67832 1806996 1445620 4.25 839104 1641680 136412
14:15:36 14:10:01 29892100 31727572 3047120 9.25 84488 2048636 1425832 4.20 856172 1875232 121992
14:15:36 14:11:01 27064936 31676632 5874284 17.83 129916 4662640 1397440 4.11 1000808 4400468 2092788
14:15:36 14:12:01 24794100 30737484 8145120 24.73 155660 5898008 7826208 23.03 2058944 5491624 120
14:15:36 14:13:01 23543692 29603760 9395528 28.52 157420 6009204 8835108 25.99 3266720 5521116 520
14:15:36 14:14:01 23475924 29536860 9463296 28.73 157548 6009784 8852500 26.05 3335400 5521060 212
14:15:36 14:15:01 25702144 31595028 7237076 21.97 158944 5858220 1574548 4.63 1319716 5374016 2948
14:15:36 Average: 26377379 30944051 6561841 19.92 130258 4613355 4479608 13.18 1810981 4260742 336427
14:15:36
14:15:36 14:08:02 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
14:15:36 14:09:01 ens3 296.17 192.87 1345.97 55.14 0.00 0.00 0.00 0.00
14:15:36 14:09:01 lo 1.49 1.49 0.17 0.17 0.00 0.00 0.00 0.00
14:15:36 14:09:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:15:36 14:10:01 ens3 42.33 31.68 606.05 6.74 0.00 0.00 0.00 0.00
14:15:36 14:10:01 lo 1.13 1.13 0.12 0.12 0.00 0.00 0.00 0.00
14:15:36 14:10:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:15:36 14:11:01 ens3 950.01 473.92 19517.70 35.15 0.00 0.00 0.00 0.00
14:15:36 14:11:01 br-d7c642aca212 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:15:36 14:11:01 lo 9.53 9.53 0.95 0.95 0.00 0.00 0.00 0.00
14:15:36 14:11:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:15:36 14:12:01 veth41c52b9 0.05 0.32 0.00 0.02 0.00 0.00 0.00 0.00
14:15:36 14:12:01 vethe5e45da 1.77 1.85 0.17 0.18 0.00 0.00 0.00 0.00
14:15:36 14:12:01 ens3 376.25 209.12 12289.98 14.69 0.00 0.00 0.00 0.00
14:15:36 14:12:01 br-d7c642aca212 0.63 0.52 0.05 0.29 0.00 0.00 0.00 0.00
14:15:36 14:13:01 veth41c52b9 0.50 0.53 0.05 1.32 0.00 0.00 0.00 0.00
14:15:36 14:13:01 vethe5e45da 15.70 13.31 1.97 1.98 0.00 0.00 0.00 0.00
14:15:36 14:13:01 ens3 6.00 4.75 1.45 1.58 0.00 0.00 0.00 0.00
14:15:36 14:13:01 br-d7c642aca212 1.82 2.08 1.75 1.69 0.00 0.00 0.00 0.00
14:15:36 14:14:01 veth41c52b9 0.57 0.58 0.05 1.52 0.00 0.00 0.00 0.00
14:15:36 14:14:01 vethe5e45da 13.83 9.33 1.05 1.34 0.00 0.00 0.00 0.00
14:15:36 14:14:01 ens3 1.65 1.38 0.34 0.27 0.00 0.00 0.00 0.00
14:15:36 14:14:01 br-d7c642aca212 0.85 0.83 0.11 0.08 0.00 0.00 0.00 0.00
14:15:36 14:15:01 ens3 44.91 38.19 66.40 29.13 0.00 0.00 0.00 0.00
14:15:36 14:15:01 lo 35.48 35.48 6.28 6.28 0.00 0.00 0.00 0.00
14:15:36 14:15:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:15:36 Average: ens3 245.21 135.85 4840.87 20.30 0.00 0.00 0.00 0.00
14:15:36 Average: lo 4.53 4.53 0.85 0.85 0.00 0.00 0.00 0.00
14:15:36 Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
14:15:36
14:15:36
14:15:36 ---> sar -P ALL:
14:15:36 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-21829)  04/09/24  _x86_64_  (8 CPU)
14:15:36
14:15:36 14:07:25     LINUX RESTART  (8 CPU)
14:15:36
14:15:36 14:08:02 CPU %user %nice %system %iowait %steal %idle
14:15:36 14:09:01 all 11.07 0.00 0.90 3.31 0.04 84.68
14:15:36 14:09:01 0 19.76 0.00 1.62 1.67 0.03 76.93
14:15:36 14:09:01 1 16.12 0.00 0.95 8.84 0.05 74.04
14:15:36 14:09:01 2 4.13 0.00 0.68 3.68 0.05 91.46
14:15:36 14:09:01 3 14.24 0.00 0.83 1.53 0.03 83.36
14:15:36 14:09:01 4 15.68 0.00 1.11 0.87 0.05 82.29
14:15:36 14:09:01 5 6.23 0.00 0.73 0.46 0.03 92.55
14:15:36 14:09:01 6 10.65 0.00 0.92 0.36 0.03 88.05
14:15:36 14:09:01 7 1.78 0.00 0.37 9.04 0.02 88.79
14:15:36 14:10:01 all 8.14 0.00 0.55 4.22 0.03 87.07
14:15:36 14:10:01 0 2.46 0.00 0.42 7.45 0.03 89.64
14:15:36 14:10:01 1 3.62 0.00 0.38 11.72 0.03 84.24
14:15:36 14:10:01 2 20.40 0.00 1.27 11.09 0.05 67.19
14:15:36 14:10:01 3 12.15 0.00 0.92 1.39 0.03 85.50
14:15:36 14:10:01 4 14.14 0.00 0.60 1.42 0.03 83.80
14:15:36 14:10:01 5 7.77 0.00 0.50 0.33 0.02 91.38
14:15:36 14:10:01 6 3.74 0.00 0.20 0.25 0.02 95.79
14:15:36 14:10:01 7 0.87 0.00 0.12 0.12 0.02 98.88
14:15:36 14:11:01 all 10.39 0.00 4.16 4.09 0.08 81.28
14:15:36 14:11:01 0 9.78 0.00 3.60 1.40 0.07 85.16
14:15:36 14:11:01 1 10.08 0.00 4.59 11.32 0.09 73.92
14:15:36 14:11:01 2 9.46 0.00 2.92 10.66 0.07 76.89
14:15:36 14:11:01 3 12.28 0.00 5.56 1.05 0.09 81.01
14:15:36 14:11:01 4 9.04 0.00 3.57 0.20 0.12 87.07
14:15:36 14:11:01 5 9.89 0.00 4.31 0.22 0.07 85.51
14:15:36 14:11:01 6 11.75 0.00 3.60 1.05 0.07 83.53
14:15:36 14:11:01 7 10.86 0.00 5.14 6.80 0.08 77.10
14:15:36 14:12:01 all 11.23 0.00 3.69 10.95 0.07 74.07
14:15:36 14:12:01 0 13.01 0.00 3.71 10.58 0.05 72.65
14:15:36 14:12:01 1 9.99 0.00 3.60 6.32 0.05 80.04
14:15:36 14:12:01 2 12.56 0.00 4.28 29.31 0.10 53.76
14:15:36 14:12:01 3 9.75 0.00 4.01 11.12 0.07 75.06
14:15:36 14:12:01 4 11.22 0.00 2.99 0.50 0.05 85.23
14:15:36 14:12:01 5 13.41 0.00 3.04 1.76 0.07 81.71
14:15:36 14:12:01 6 10.33 0.00 3.63 5.43 0.05 80.55
14:15:36 14:12:01 7 9.57 0.00 4.26 22.72 0.08 63.37
14:15:36 14:13:01 all 23.71 0.00 2.17 1.15 0.08 72.89
14:15:36 14:13:01 0 24.58 0.00 2.22 0.05 0.08 73.06
14:15:36 14:13:01 1 29.16 0.00 2.61 0.05 0.07 68.11
14:15:36 14:13:01 2 25.99 0.00 2.28 2.96 0.05 68.72
14:15:36 14:13:01 3 16.11 0.00 1.65 0.05 0.08 82.10
14:15:36 14:13:01 4 23.12 0.00 2.13 4.49 0.08 70.19
14:15:36 14:13:01 5 33.08 0.00 3.18 0.03 0.08 63.62
14:15:36 14:13:01 6 16.81 0.00 1.82 1.59 0.10 79.68
14:15:36 14:13:01 7 20.80 0.00 1.47 0.00 0.07 77.66
14:15:36 14:14:01 all 1.20 0.00 0.20 1.32 0.04 97.24
14:15:36 14:14:01 0 1.25 0.00 0.18 0.00 0.05 98.52
14:15:36 14:14:01 1 1.65 0.00 0.40 0.00 0.07 97.88
14:15:36 14:14:01 2 0.83 0.00 0.13 0.00 0.03 99.00
14:15:36 14:14:01 3 0.99 0.00 0.23 0.03 0.07 98.68
14:15:36 14:14:01 4 1.43 0.00 0.18 10.30 0.03 88.05
14:15:36 14:14:01 5 1.08 0.00 0.13 0.02 0.03 98.73
14:15:36 14:14:01 6 1.40 0.00 0.17 0.13 0.02 98.28
14:15:36 14:14:01 7 0.93 0.00 0.10 0.10 0.03 98.83
14:15:36 14:15:01 all 3.02 0.00 0.65 1.49 0.05 94.79
14:15:36 14:15:01 0 1.80 0.00 0.77 0.43 0.03 96.97
14:15:36 14:15:01 1 1.22 0.00 0.72 0.05 0.05 97.96
14:15:36 14:15:01 2 2.79 0.00 0.62 0.10 0.03 96.46
14:15:36 14:15:01 3 1.95 0.00 0.60 0.32 0.05 97.08
14:15:36 14:15:01 4 2.85 0.00 0.43 8.95 0.05 87.72
14:15:36 14:15:01 5 1.79 0.00 0.73 0.02 0.05 97.41
14:15:36 14:15:01 6 1.57 0.00 0.58 1.09 0.03 96.72
14:15:36 14:15:01 7 10.20 0.00 0.77 0.84 0.05 88.15
14:15:36 Average: all 9.81 0.00 1.75 3.78 0.05 84.60
14:15:36 Average: 0 10.34 0.00 1.78 3.08 0.05 84.74
14:15:36 Average: 1 10.24 0.00 1.89 5.45 0.06 82.37
14:15:36 Average: 2 10.89 0.00 1.74 8.23 0.06 79.09
14:15:36 Average: 3 9.61 0.00 1.96 2.20 0.06 86.16
14:15:36 Average: 4 11.04 0.00 1.57 3.85 0.06 83.49
14:15:36 Average: 5 10.46 0.00 1.80 0.41 0.05 87.28
14:15:36 Average: 6 8.02 0.00 1.56 1.41 0.05 88.96
14:15:36 Average: 7 7.86 0.00 1.74 5.62 0.05 84.73
14:15:36
14:15:36
14:15:36
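The host-diagnostics dump above (uname -a through sar -P ALL) can be reproduced with a short loop; a minimal sketch, assuming the sysstat package is installed and collecting data so sar has something to report:

  #!/bin/bash
  # Print each diagnostic under the same "---> cmd:" marker the log uses.
  for cmd in 'uname -a' 'lscpu' 'nproc' 'df -h' 'free -m' 'ip addr' \
             'sar -b -r -n DEV' 'sar -P ALL'; do
    echo "---> $cmd:"
    $cmd   # intentional word-splitting: each entry is a simple command
    echo
  done

The unquoted $cmd relies on word-splitting to separate each command from its flags, which is safe here because every entry is a plain command with simple options.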