11:44:04 Started by upstream project "policy-pap-master-merge-java" build number 347 11:44:04 originally caused by: 11:44:04 Triggered by Gerrit: https://gerrit.onap.org/r/c/policy/pap/+/137330 11:44:04 Running as SYSTEM 11:44:04 [EnvInject] - Loading node environment variables. 11:44:04 Building remotely on prd-ubuntu1804-docker-8c-8g-7307 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap 11:44:04 [ssh-agent] Looking for ssh-agent implementation... 11:44:04 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine) 11:44:04 $ ssh-agent 11:44:04 SSH_AUTH_SOCK=/tmp/ssh-aDYuFjMjZ4Ez/agent.2114 11:44:04 SSH_AGENT_PID=2116 11:44:04 [ssh-agent] Started. 11:44:04 Running ssh-add (command line suppressed) 11:44:04 Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_8475339276064542234.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_8475339276064542234.key) 11:44:04 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user) 11:44:04 The recommended git tool is: NONE 11:44:06 using credential onap-jenkins-ssh 11:44:06 Wiping out workspace first. 11:44:06 Cloning the remote Git repository 11:44:06 Cloning repository git://cloud.onap.org/mirror/policy/docker.git 11:44:06 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10 11:44:06 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git 11:44:06 > git --version # timeout=10 11:44:06 > git --version # 'git version 2.17.1' 11:44:06 using GIT_SSH to set credentials Gerrit user 11:44:06 Verifying host key using manually-configured host key entries 11:44:06 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30 11:44:06 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10 11:44:06 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 11:44:07 Avoid second fetch 11:44:07 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10 11:44:07 Checking out Revision dd836dc2d2bd379fba19b395c912d32f1bc7ee38 (refs/remotes/origin/master) 11:44:07 > git config core.sparsecheckout # timeout=10 11:44:07 > git checkout -f dd836dc2d2bd379fba19b395c912d32f1bc7ee38 # timeout=30 11:44:07 Commit message: "Update snapshot and/or references of policy/docker to latest snapshots" 11:44:07 > git rev-list --no-walk dd836dc2d2bd379fba19b395c912d32f1bc7ee38 # timeout=10 11:44:07 provisioning config files... 11:44:07 copy managed file [npmrc] to file:/home/jenkins/.npmrc 11:44:07 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf 11:44:07 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4475356818208807360.sh 11:44:07 ---> python-tools-install.sh 11:44:07 Setup pyenv: 11:44:07 * system (set by /opt/pyenv/version) 11:44:07 * 3.8.13 (set by /opt/pyenv/version) 11:44:07 * 3.9.13 (set by /opt/pyenv/version) 11:44:07 * 3.10.6 (set by /opt/pyenv/version) 11:44:12 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-VvqR 11:44:12 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv 11:44:15 lf-activate-venv(): INFO: Installing: lftools 11:44:49 lf-activate-venv(): INFO: Adding /tmp/venv-VvqR/bin to PATH 11:44:49 Generating Requirements File 11:45:23 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. 
11:45:23 lftools 0.37.9 requires openstacksdk>=2.1.0, but you have openstacksdk 0.62.0 which is incompatible. 11:45:23 Python 3.10.6 11:45:24 pip 24.0 from /tmp/venv-VvqR/lib/python3.10/site-packages/pip (python 3.10) 11:45:24 appdirs==1.4.4 11:45:24 argcomplete==3.2.2 11:45:24 aspy.yaml==1.3.0 11:45:24 attrs==23.2.0 11:45:24 autopage==0.5.2 11:45:24 beautifulsoup4==4.12.3 11:45:24 boto3==1.34.46 11:45:24 botocore==1.34.46 11:45:24 bs4==0.0.2 11:45:24 cachetools==5.3.2 11:45:24 certifi==2024.2.2 11:45:24 cffi==1.16.0 11:45:24 cfgv==3.4.0 11:45:24 chardet==5.2.0 11:45:24 charset-normalizer==3.3.2 11:45:24 click==8.1.7 11:45:24 cliff==4.5.0 11:45:24 cmd2==2.4.3 11:45:24 cryptography==3.3.2 11:45:24 debtcollector==2.5.0 11:45:24 decorator==5.1.1 11:45:24 defusedxml==0.7.1 11:45:24 Deprecated==1.2.14 11:45:24 distlib==0.3.8 11:45:24 dnspython==2.6.1 11:45:24 docker==4.2.2 11:45:24 dogpile.cache==1.3.1 11:45:24 email-validator==2.1.0.post1 11:45:24 filelock==3.13.1 11:45:24 future==0.18.3 11:45:24 gitdb==4.0.11 11:45:24 GitPython==3.1.42 11:45:24 google-auth==2.28.0 11:45:24 httplib2==0.22.0 11:45:24 identify==2.5.35 11:45:24 idna==3.6 11:45:24 importlib-resources==1.5.0 11:45:24 iso8601==2.1.0 11:45:24 Jinja2==3.1.3 11:45:24 jmespath==1.0.1 11:45:24 jsonpatch==1.33 11:45:24 jsonpointer==2.4 11:45:24 jsonschema==4.21.1 11:45:24 jsonschema-specifications==2023.12.1 11:45:24 keystoneauth1==5.5.0 11:45:24 kubernetes==29.0.0 11:45:24 lftools==0.37.9 11:45:24 lxml==5.1.0 11:45:24 MarkupSafe==2.1.5 11:45:24 msgpack==1.0.7 11:45:24 multi_key_dict==2.0.3 11:45:24 munch==4.0.0 11:45:24 netaddr==1.2.1 11:45:24 netifaces==0.11.0 11:45:24 niet==1.4.2 11:45:24 nodeenv==1.8.0 11:45:24 oauth2client==4.1.3 11:45:24 oauthlib==3.2.2 11:45:24 openstacksdk==0.62.0 11:45:24 os-client-config==2.1.0 11:45:24 os-service-types==1.7.0 11:45:24 osc-lib==3.0.0 11:45:24 oslo.config==9.3.0 11:45:24 oslo.context==5.3.0 11:45:24 oslo.i18n==6.2.0 11:45:24 oslo.log==5.4.0 11:45:24 oslo.serialization==5.3.0 11:45:24 oslo.utils==7.0.0 11:45:24 packaging==23.2 11:45:24 pbr==6.0.0 11:45:24 platformdirs==4.2.0 11:45:24 prettytable==3.10.0 11:45:24 pyasn1==0.5.1 11:45:24 pyasn1-modules==0.3.0 11:45:24 pycparser==2.21 11:45:24 pygerrit2==2.0.15 11:45:24 PyGithub==2.2.0 11:45:24 pyinotify==0.9.6 11:45:24 PyJWT==2.8.0 11:45:24 PyNaCl==1.5.0 11:45:24 pyparsing==2.4.7 11:45:24 pyperclip==1.8.2 11:45:24 pyrsistent==0.20.0 11:45:24 python-cinderclient==9.4.0 11:45:24 python-dateutil==2.8.2 11:45:24 python-heatclient==3.4.0 11:45:24 python-jenkins==1.8.2 11:45:24 python-keystoneclient==5.3.0 11:45:24 python-magnumclient==4.3.0 11:45:24 python-novaclient==18.4.0 11:45:24 python-openstackclient==6.0.1 11:45:24 python-swiftclient==4.4.0 11:45:24 pytz==2024.1 11:45:24 PyYAML==6.0.1 11:45:24 referencing==0.33.0 11:45:24 requests==2.31.0 11:45:24 requests-oauthlib==1.3.1 11:45:24 requestsexceptions==1.4.0 11:45:24 rfc3986==2.0.0 11:45:24 rpds-py==0.18.0 11:45:24 rsa==4.9 11:45:24 ruamel.yaml==0.18.6 11:45:24 ruamel.yaml.clib==0.2.8 11:45:24 s3transfer==0.10.0 11:45:24 simplejson==3.19.2 11:45:24 six==1.16.0 11:45:24 smmap==5.0.1 11:45:24 soupsieve==2.5 11:45:24 stevedore==5.1.0 11:45:24 tabulate==0.9.0 11:45:24 toml==0.10.2 11:45:24 tomlkit==0.12.3 11:45:24 tqdm==4.66.2 11:45:24 typing_extensions==4.9.0 11:45:24 tzdata==2024.1 11:45:24 urllib3==1.26.18 11:45:24 virtualenv==20.25.0 11:45:24 wcwidth==0.2.13 11:45:24 websocket-client==1.7.0 11:45:24 wrapt==1.16.0 11:45:24 xdg==6.0.0 11:45:24 xmltodict==0.13.0 11:45:24 yq==3.2.3 11:45:24 [EnvInject] - 
Injecting environment variables from a build step. 11:45:24 [EnvInject] - Injecting as environment variables the properties content 11:45:24 SET_JDK_VERSION=openjdk17 11:45:24 GIT_URL="git://cloud.onap.org/mirror" 11:45:24 11:45:24 [EnvInject] - Variables injected successfully. 11:45:24 [policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins7669995307230790393.sh 11:45:24 ---> update-java-alternatives.sh 11:45:24 ---> Updating Java version 11:45:24 ---> Ubuntu/Debian system detected 11:45:25 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode 11:45:25 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode 11:45:25 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode 11:45:25 openjdk version "17.0.4" 2022-07-19 11:45:25 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04) 11:45:25 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing) 11:45:25 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 11:45:25 [EnvInject] - Injecting environment variables from a build step. 11:45:25 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env' 11:45:25 [EnvInject] - Variables injected successfully. 11:45:25 [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins12709585660489702825.sh 11:45:25 + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap 11:45:25 + set +u 11:45:25 + save_set 11:45:25 + RUN_CSIT_SAVE_SET=ehxB 11:45:25 + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace 11:45:25 + '[' 1 -eq 0 ']' 11:45:25 + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 11:45:25 + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 11:45:25 + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 11:45:25 + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts 11:45:25 + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts 11:45:25 + export ROBOT_VARIABLES= 11:45:25 + ROBOT_VARIABLES= 11:45:25 + export PROJECT=pap 11:45:25 + PROJECT=pap 11:45:25 + cd /w/workspace/policy-pap-master-project-csit-pap 11:45:25 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 11:45:25 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 11:45:25 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh 11:45:25 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']' 11:45:25 + relax_set 11:45:25 + set +e 11:45:25 + set +o pipefail 11:45:25 + . 
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh 11:45:25 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 11:45:25 +++ mktemp -d 11:45:25 ++ ROBOT_VENV=/tmp/tmp.hvN0kqIUZe 11:45:25 ++ echo ROBOT_VENV=/tmp/tmp.hvN0kqIUZe 11:45:25 +++ python3 --version 11:45:25 ++ echo 'Python version is: Python 3.6.9' 11:45:25 Python version is: Python 3.6.9 11:45:25 ++ python3 -m venv --clear /tmp/tmp.hvN0kqIUZe 11:45:26 ++ source /tmp/tmp.hvN0kqIUZe/bin/activate 11:45:26 +++ deactivate nondestructive 11:45:26 +++ '[' -n '' ']' 11:45:26 +++ '[' -n '' ']' 11:45:26 +++ '[' -n /bin/bash -o -n '' ']' 11:45:26 +++ hash -r 11:45:26 +++ '[' -n '' ']' 11:45:26 +++ unset VIRTUAL_ENV 11:45:26 +++ '[' '!' nondestructive = nondestructive ']' 11:45:26 +++ VIRTUAL_ENV=/tmp/tmp.hvN0kqIUZe 11:45:26 +++ export VIRTUAL_ENV 11:45:26 +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 11:45:26 +++ PATH=/tmp/tmp.hvN0kqIUZe/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 11:45:26 +++ export PATH 11:45:26 +++ '[' -n '' ']' 11:45:26 +++ '[' -z '' ']' 11:45:26 +++ _OLD_VIRTUAL_PS1= 11:45:26 +++ '[' 'x(tmp.hvN0kqIUZe) ' '!=' x ']' 11:45:26 +++ PS1='(tmp.hvN0kqIUZe) ' 11:45:26 +++ export PS1 11:45:26 +++ '[' -n /bin/bash -o -n '' ']' 11:45:26 +++ hash -r 11:45:26 ++ set -exu 11:45:26 ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1' 11:45:29 ++ echo 'Installing Python Requirements' 11:45:29 Installing Python Requirements 11:45:29 ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt 11:45:47 ++ python3 -m pip -qq freeze 11:45:48 bcrypt==4.0.1 11:45:48 beautifulsoup4==4.12.3 11:45:48 bitarray==2.9.2 11:45:48 certifi==2024.2.2 11:45:48 cffi==1.15.1 11:45:48 charset-normalizer==2.0.12 11:45:48 cryptography==40.0.2 11:45:48 decorator==5.1.1 11:45:48 elasticsearch==7.17.9 11:45:48 elasticsearch-dsl==7.4.1 11:45:48 enum34==1.1.10 11:45:48 idna==3.6 11:45:48 importlib-resources==5.4.0 11:45:48 ipaddr==2.2.0 11:45:48 isodate==0.6.1 11:45:48 jmespath==0.10.0 11:45:48 jsonpatch==1.32 11:45:48 jsonpath-rw==1.4.0 11:45:48 jsonpointer==2.3 11:45:48 lxml==5.1.0 11:45:48 netaddr==0.8.0 11:45:48 netifaces==0.11.0 11:45:48 odltools==0.1.28 11:45:48 paramiko==3.4.0 11:45:48 pkg_resources==0.0.0 11:45:48 ply==3.11 11:45:48 pyang==2.6.0 11:45:48 pyangbind==0.8.1 11:45:48 pycparser==2.21 11:45:48 pyhocon==0.3.60 11:45:48 PyNaCl==1.5.0 11:45:48 pyparsing==3.1.1 11:45:48 python-dateutil==2.8.2 11:45:48 regex==2023.8.8 11:45:48 requests==2.27.1 11:45:48 robotframework==6.1.1 11:45:48 robotframework-httplibrary==0.4.2 11:45:48 robotframework-pythonlibcore==3.0.0 11:45:48 robotframework-requests==0.9.4 11:45:48 robotframework-selenium2library==3.0.0 11:45:48 robotframework-seleniumlibrary==5.1.3 11:45:48 robotframework-sshlibrary==3.8.0 11:45:48 scapy==2.5.0 11:45:48 scp==0.14.5 11:45:48 selenium==3.141.0 11:45:48 six==1.16.0 11:45:48 soupsieve==2.3.2.post1 11:45:48 urllib3==1.26.18 11:45:48 waitress==2.0.0 11:45:48 WebOb==1.8.7 11:45:48 WebTest==3.0.0 11:45:48 zipp==3.6.0 11:45:48 ++ mkdir -p /tmp/tmp.hvN0kqIUZe/src/onap 11:45:48 ++ rm -rf 
/tmp/tmp.hvN0kqIUZe/src/onap/testsuite 11:45:48 ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre 11:45:54 ++ echo 'Installing python confluent-kafka library' 11:45:54 Installing python confluent-kafka library 11:45:54 ++ python3 -m pip install -qq confluent-kafka 11:45:55 ++ echo 'Uninstall docker-py and reinstall docker.' 11:45:55 Uninstall docker-py and reinstall docker. 11:45:55 ++ python3 -m pip uninstall -y -qq docker 11:45:56 ++ python3 -m pip install -U -qq docker 11:45:57 ++ python3 -m pip -qq freeze 11:45:57 bcrypt==4.0.1 11:45:57 beautifulsoup4==4.12.3 11:45:57 bitarray==2.9.2 11:45:57 certifi==2024.2.2 11:45:57 cffi==1.15.1 11:45:57 charset-normalizer==2.0.12 11:45:57 confluent-kafka==2.3.0 11:45:57 cryptography==40.0.2 11:45:57 decorator==5.1.1 11:45:57 deepdiff==5.7.0 11:45:57 dnspython==2.2.1 11:45:57 docker==5.0.3 11:45:57 elasticsearch==7.17.9 11:45:57 elasticsearch-dsl==7.4.1 11:45:57 enum34==1.1.10 11:45:57 future==0.18.3 11:45:57 idna==3.6 11:45:57 importlib-resources==5.4.0 11:45:57 ipaddr==2.2.0 11:45:57 isodate==0.6.1 11:45:57 Jinja2==3.0.3 11:45:57 jmespath==0.10.0 11:45:57 jsonpatch==1.32 11:45:57 jsonpath-rw==1.4.0 11:45:57 jsonpointer==2.3 11:45:57 kafka-python==2.0.2 11:45:57 lxml==5.1.0 11:45:57 MarkupSafe==2.0.1 11:45:57 more-itertools==5.0.0 11:45:57 netaddr==0.8.0 11:45:57 netifaces==0.11.0 11:45:57 odltools==0.1.28 11:45:57 ordered-set==4.0.2 11:45:57 paramiko==3.4.0 11:45:57 pbr==6.0.0 11:45:57 pkg_resources==0.0.0 11:45:57 ply==3.11 11:45:57 protobuf==3.19.6 11:45:57 pyang==2.6.0 11:45:57 pyangbind==0.8.1 11:45:57 pycparser==2.21 11:45:57 pyhocon==0.3.60 11:45:57 PyNaCl==1.5.0 11:45:57 pyparsing==3.1.1 11:45:57 python-dateutil==2.8.2 11:45:57 PyYAML==6.0.1 11:45:57 regex==2023.8.8 11:45:57 requests==2.27.1 11:45:57 robotframework==6.1.1 11:45:57 robotframework-httplibrary==0.4.2 11:45:57 robotframework-onap==0.6.0.dev105 11:45:57 robotframework-pythonlibcore==3.0.0 11:45:57 robotframework-requests==0.9.4 11:45:57 robotframework-selenium2library==3.0.0 11:45:57 robotframework-seleniumlibrary==5.1.3 11:45:57 robotframework-sshlibrary==3.8.0 11:45:57 robotlibcore-temp==1.0.2 11:45:57 scapy==2.5.0 11:45:57 scp==0.14.5 11:45:57 selenium==3.141.0 11:45:57 six==1.16.0 11:45:57 soupsieve==2.3.2.post1 11:45:57 urllib3==1.26.18 11:45:57 waitress==2.0.0 11:45:57 WebOb==1.8.7 11:45:57 websocket-client==1.3.1 11:45:57 WebTest==3.0.0 11:45:57 zipp==3.6.0 11:45:57 ++ uname 11:45:57 ++ grep -q Linux 11:45:57 ++ sudo apt-get -y -qq install libxml2-utils 11:45:57 + load_set 11:45:57 + _setopts=ehuxB 11:45:57 ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace 11:45:57 ++ tr : ' ' 11:45:57 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 11:45:57 + set +o braceexpand 11:45:57 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 11:45:57 + set +o hashall 11:45:57 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 11:45:57 + set +o interactive-comments 11:45:57 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 11:45:57 + set +o nounset 11:45:57 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 11:45:57 + set +o xtrace 11:45:57 ++ echo ehuxB 11:45:57 ++ sed 's/./& /g' 11:45:57 + for i in $(echo "$_setopts" | sed 's/./& /g') 11:45:57 + set +e 11:45:57 + for i in $(echo "$_setopts" | sed 's/./& /g') 11:45:57 + set +h 11:45:57 + for i in $(echo "$_setopts" | sed 's/./& /g') 11:45:57 + set +u 11:45:57 + for i in $(echo "$_setopts" | sed 's/./& /g') 11:45:57 + set +x 11:45:57 + 
source_safely /tmp/tmp.hvN0kqIUZe/bin/activate 11:45:57 + '[' -z /tmp/tmp.hvN0kqIUZe/bin/activate ']' 11:45:57 + relax_set 11:45:57 + set +e 11:45:57 + set +o pipefail 11:45:57 + . /tmp/tmp.hvN0kqIUZe/bin/activate 11:45:57 ++ deactivate nondestructive 11:45:57 ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']' 11:45:57 ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 11:45:57 ++ export PATH 11:45:57 ++ unset _OLD_VIRTUAL_PATH 11:45:57 ++ '[' -n '' ']' 11:45:57 ++ '[' -n /bin/bash -o -n '' ']' 11:45:57 ++ hash -r 11:45:57 ++ '[' -n '' ']' 11:45:57 ++ unset VIRTUAL_ENV 11:45:57 ++ '[' '!' nondestructive = nondestructive ']' 11:45:57 ++ VIRTUAL_ENV=/tmp/tmp.hvN0kqIUZe 11:45:57 ++ export VIRTUAL_ENV 11:45:57 ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 11:45:57 ++ PATH=/tmp/tmp.hvN0kqIUZe/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 11:45:57 ++ export PATH 11:45:57 ++ '[' -n '' ']' 11:45:57 ++ '[' -z '' ']' 11:45:57 ++ _OLD_VIRTUAL_PS1='(tmp.hvN0kqIUZe) ' 11:45:57 ++ '[' 'x(tmp.hvN0kqIUZe) ' '!=' x ']' 11:45:57 ++ PS1='(tmp.hvN0kqIUZe) (tmp.hvN0kqIUZe) ' 11:45:57 ++ export PS1 11:45:57 ++ '[' -n /bin/bash -o -n '' ']' 11:45:57 ++ hash -r 11:45:57 + load_set 11:45:57 + _setopts=hxB 11:45:57 ++ echo braceexpand:hashall:interactive-comments:xtrace 11:45:57 ++ tr : ' ' 11:45:57 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 11:45:57 + set +o braceexpand 11:45:57 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 11:45:57 + set +o hashall 11:45:57 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 11:45:57 + set +o interactive-comments 11:45:57 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 11:45:57 + set +o xtrace 11:45:57 ++ sed 's/./& /g' 11:45:57 ++ echo hxB 11:45:57 + for i in $(echo "$_setopts" | sed 's/./& /g') 11:45:57 + set +h 11:45:57 + for i in $(echo "$_setopts" | sed 's/./& /g') 11:45:57 + set +x 11:45:57 + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests 11:45:57 + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests 11:45:57 + export TEST_OPTIONS= 11:45:57 + TEST_OPTIONS= 11:45:57 ++ mktemp -d 11:45:57 + WORKDIR=/tmp/tmp.jgsfe5r9Wz 11:45:57 + cd /tmp/tmp.jgsfe5r9Wz 11:45:57 + docker login -u docker -p docker nexus3.onap.org:10001 11:45:58 WARNING! Using --password via the CLI is insecure. Use --password-stdin. 11:45:58 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json. 11:45:58 Configure a credential helper to remove this warning. 
See 11:45:58 https://docs.docker.com/engine/reference/commandline/login/#credentials-store 11:45:58 11:45:58 Login Succeeded 11:45:58 + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 11:45:58 + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' 11:45:58 + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh' 11:45:58 Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 11:45:58 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 11:45:58 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' 11:45:58 + relax_set 11:45:58 + set +e 11:45:58 + set +o pipefail 11:45:58 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 11:45:58 ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh 11:45:58 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 11:45:58 ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview 11:45:58 +++ GERRIT_BRANCH=master 11:45:58 +++ echo GERRIT_BRANCH=master 11:45:58 GERRIT_BRANCH=master 11:45:58 +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models 11:45:58 +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models 11:45:58 +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models 11:45:58 Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'... 11:45:59 +++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies 11:45:59 +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies 11:45:59 +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 11:45:59 +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 11:45:59 +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json 11:45:59 +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json 11:45:59 ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana 11:45:59 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 11:45:59 +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose 11:45:59 +++ grafana=false 11:45:59 +++ gui=false 11:45:59 +++ [[ 2 -gt 0 ]] 11:45:59 +++ key=apex-pdp 11:45:59 +++ case $key in 11:45:59 +++ echo apex-pdp 11:45:59 apex-pdp 11:45:59 +++ component=apex-pdp 11:45:59 +++ shift 11:45:59 +++ [[ 1 -gt 0 ]] 11:45:59 +++ key=--grafana 11:45:59 +++ case $key in 11:45:59 +++ grafana=true 11:45:59 +++ shift 11:45:59 +++ [[ 0 -gt 0 ]] 11:45:59 +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose 11:45:59 +++ echo 'Configuring docker compose...' 11:45:59 Configuring docker compose... 
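(Editor's note) The docker login step above passes the registry password with -p on the command line, which is what triggers the "Use --password-stdin" warning. A minimal sketch of the stdin-based form, reusing the registry and throwaway credentials shown in this log; this is an illustration only, not part of the job's scripts:

    # feed the password on stdin instead of the command line to avoid the CLI warning
    echo 'docker' | docker login -u docker --password-stdin nexus3.onap.org:10001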
11:45:59 +++ source export-ports.sh 11:45:59 +++ source get-versions.sh 11:46:01 +++ '[' -z pap ']' 11:46:01 +++ '[' -n apex-pdp ']' 11:46:01 +++ '[' apex-pdp == logs ']' 11:46:01 +++ '[' true = true ']' 11:46:01 +++ echo 'Starting apex-pdp application with Grafana' 11:46:01 Starting apex-pdp application with Grafana 11:46:01 +++ docker-compose up -d apex-pdp grafana 11:46:02 Creating network "compose_default" with the default driver 11:46:02 Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)... 11:46:02 latest: Pulling from prom/prometheus 11:46:05 Digest: sha256:beb5e30ffba08d9ae8a7961b9a2145fc8af6296ff2a4f463df7cd722fcbfc789 11:46:05 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest 11:46:05 Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)... 11:46:06 latest: Pulling from grafana/grafana 11:46:11 Digest: sha256:8640e5038e83ca4554ed56b9d76375158bcd51580238c6f5d8adaf3f20dd5379 11:46:11 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest 11:46:11 Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)... 11:46:11 10.10.2: Pulling from mariadb 11:46:17 Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e 11:46:17 Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2 11:46:17 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)... 11:46:18 3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator 11:46:21 Digest: sha256:5772a5c551b30d73f901debb8dc38f305559b920e248a9ccb1dba3b880278a13 11:46:21 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT 11:46:21 Pulling zookeeper (confluentinc/cp-zookeeper:latest)... 11:46:22 latest: Pulling from confluentinc/cp-zookeeper 11:46:34 Digest: sha256:9babd1c0beaf93189982bdbb9fe4bf194a2730298b640c057817746c19838866 11:46:34 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest 11:46:34 Pulling kafka (confluentinc/cp-kafka:latest)... 11:46:36 latest: Pulling from confluentinc/cp-kafka 11:46:52 Digest: sha256:24cdd3a7fa89d2bed150560ebea81ff1943badfa61e51d66bb541a6b0d7fb047 11:46:53 Status: Downloaded newer image for confluentinc/cp-kafka:latest 11:46:54 Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)... 11:47:01 3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator 11:47:07 Digest: sha256:d2876ccda69cc445de980a3d4765cb553f81049d67cc6056cfa9e5429597baa6 11:47:07 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT 11:47:07 Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)... 11:47:08 3.1.2-SNAPSHOT: Pulling from onap/policy-api 11:47:10 Digest: sha256:71cc3c3555fddbd324c5ddec27e24db340b82732d2f6ce50eddcfdf6715a7ab2 11:47:10 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT 11:47:10 Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)... 11:47:10 3.1.2-SNAPSHOT: Pulling from onap/policy-pap 11:47:13 Digest: sha256:448850bc9066413f6555e9c62d97da12eaa2c454a1304262987462aae46f4676 11:47:13 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT 11:47:13 Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)... 
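(Editor's note) Each image pull in this section reports the sha256 content digest it resolved to. A small illustrative check, not part of the CSIT scripts, that lists the digest a locally pulled tag corresponds to so it can be compared against the values logged here:

    # show repository, tag and content digest for the pulled policy-pap image
    docker images --digests --format 'table {{.Repository}}\t{{.Tag}}\t{{.Digest}}' nexus3.onap.org:10001/onap/policy-pap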
11:47:13 3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp 11:47:19 Digest: sha256:8670bcaff746ebc196cef9125561eb167e1e65c7e2f8d374c0d8834d57564da4 11:47:19 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT 11:47:19 Creating simulator ... 11:47:19 Creating compose_zookeeper_1 ... 11:47:19 Creating prometheus ... 11:47:19 Creating mariadb ... 11:47:32 Creating mariadb ... done 11:47:32 Creating policy-db-migrator ... 11:47:33 Creating compose_zookeeper_1 ... done 11:47:33 Creating kafka ... 11:47:34 Creating kafka ... done 11:47:35 Creating policy-db-migrator ... done 11:47:35 Creating policy-api ... 11:47:36 Creating policy-api ... done 11:47:36 Creating policy-pap ... 11:47:37 Creating policy-pap ... done 11:47:38 Creating simulator ... done 11:47:38 Creating policy-apex-pdp ... 11:47:39 Creating policy-apex-pdp ... done 11:47:40 Creating prometheus ... done 11:47:40 Creating grafana ... 11:47:41 Creating grafana ... done 11:47:41 +++ echo 'Prometheus server: http://localhost:30259' 11:47:41 Prometheus server: http://localhost:30259 11:47:41 +++ echo 'Grafana server: http://localhost:30269' 11:47:41 Grafana server: http://localhost:30269 11:47:41 +++ cd /w/workspace/policy-pap-master-project-csit-pap 11:47:41 ++ sleep 10 11:47:51 ++ unset http_proxy https_proxy 11:47:51 ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003 11:47:51 Waiting for REST to come up on localhost port 30003... 11:47:51 NAMES STATUS 11:47:51 grafana Up 10 seconds 11:47:51 policy-apex-pdp Up 12 seconds 11:47:51 policy-pap Up 14 seconds 11:47:51 policy-api Up 15 seconds 11:47:51 kafka Up 17 seconds 11:47:51 mariadb Up 19 seconds 11:47:51 compose_zookeeper_1 Up 18 seconds 11:47:51 prometheus Up 11 seconds 11:47:51 simulator Up 13 seconds 11:47:56 NAMES STATUS 11:47:56 grafana Up 15 seconds 11:47:56 policy-apex-pdp Up 17 seconds 11:47:56 policy-pap Up 19 seconds 11:47:56 policy-api Up 20 seconds 11:47:56 kafka Up 22 seconds 11:47:56 mariadb Up 24 seconds 11:47:56 compose_zookeeper_1 Up 23 seconds 11:47:56 prometheus Up 16 seconds 11:47:56 simulator Up 18 seconds 11:48:01 NAMES STATUS 11:48:01 grafana Up 20 seconds 11:48:01 policy-apex-pdp Up 22 seconds 11:48:01 policy-pap Up 24 seconds 11:48:01 policy-api Up 25 seconds 11:48:01 kafka Up 27 seconds 11:48:01 mariadb Up 29 seconds 11:48:01 compose_zookeeper_1 Up 28 seconds 11:48:01 prometheus Up 21 seconds 11:48:01 simulator Up 23 seconds 11:48:06 NAMES STATUS 11:48:06 grafana Up 25 seconds 11:48:06 policy-apex-pdp Up 27 seconds 11:48:06 policy-pap Up 29 seconds 11:48:06 policy-api Up 30 seconds 11:48:06 kafka Up 32 seconds 11:48:06 mariadb Up 34 seconds 11:48:06 compose_zookeeper_1 Up 33 seconds 11:48:06 prometheus Up 26 seconds 11:48:06 simulator Up 28 seconds 11:48:11 NAMES STATUS 11:48:11 grafana Up 30 seconds 11:48:11 policy-apex-pdp Up 32 seconds 11:48:11 policy-pap Up 34 seconds 11:48:11 policy-api Up 35 seconds 11:48:11 kafka Up 37 seconds 11:48:11 mariadb Up 39 seconds 11:48:11 compose_zookeeper_1 Up 38 seconds 11:48:11 prometheus Up 31 seconds 11:48:11 simulator Up 33 seconds 11:48:11 ++ export 'SUITES=pap-test.robot 11:48:11 pap-slas.robot' 11:48:11 ++ SUITES='pap-test.robot 11:48:11 pap-slas.robot' 11:48:11 ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 11:48:11 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v 
NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' 11:48:11 + load_set 11:48:11 + _setopts=hxB 11:48:11 ++ echo braceexpand:hashall:interactive-comments:xtrace 11:48:11 ++ tr : ' ' 11:48:11 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 11:48:11 + set +o braceexpand 11:48:11 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 11:48:11 + set +o hashall 11:48:11 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 11:48:11 + set +o interactive-comments 11:48:11 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 11:48:11 + set +o xtrace 11:48:11 ++ echo hxB 11:48:11 ++ sed 's/./& /g' 11:48:11 + for i in $(echo "$_setopts" | sed 's/./& /g') 11:48:11 + set +h 11:48:11 + for i in $(echo "$_setopts" | sed 's/./& /g') 11:48:11 + set +x 11:48:11 + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt 11:48:11 + docker_stats 11:48:11 ++ uname -s 11:48:11 + '[' Linux == Darwin ']' 11:48:11 + sh -c 'top -bn1 | head -3' 11:48:12 top - 11:48:12 up 4 min, 0 users, load average: 3.30, 1.47, 0.59 11:48:12 Tasks: 211 total, 2 running, 130 sleeping, 0 stopped, 0 zombie 11:48:12 %Cpu(s): 12.3 us, 2.6 sy, 0.0 ni, 79.8 id, 5.2 wa, 0.0 hi, 0.1 si, 0.1 st 11:48:12 + echo 11:48:12 11:48:12 + sh -c 'free -h' 11:48:12 total used free shared buff/cache available 11:48:12 Mem: 31G 2.6G 22G 1.3M 6.0G 28G 11:48:12 Swap: 1.0G 0B 1.0G 11:48:12 + echo 11:48:12 11:48:12 + docker ps --format 'table {{ .Names }}\t{{ .Status }}' 11:48:12 NAMES STATUS 11:48:12 grafana Up 30 seconds 11:48:12 policy-apex-pdp Up 32 seconds 11:48:12 policy-pap Up 34 seconds 11:48:12 policy-api Up 36 seconds 11:48:12 kafka Up 37 seconds 11:48:12 mariadb Up 39 seconds 11:48:12 compose_zookeeper_1 Up 38 seconds 11:48:12 prometheus Up 31 seconds 11:48:12 simulator Up 33 seconds 11:48:12 + echo 11:48:12 + docker stats --no-stream 11:48:12 11:48:14 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 11:48:14 69886185bd1e grafana 0.03% 56.5MiB / 31.41GiB 0.18% 18.3kB / 3.18kB 0B / 24.1MB 18 11:48:14 2763cb24824d policy-apex-pdp 135.18% 181MiB / 31.41GiB 0.56% 7.32kB / 6.99kB 0B / 0B 48 11:48:14 d084b816e76e policy-pap 1.32% 496MiB / 31.41GiB 1.54% 28.2kB / 29.9kB 0B / 154MB 61 11:48:14 4c89bf7ac2f1 policy-api 0.09% 492.1MiB / 31.41GiB 1.53% 1e+03kB / 710kB 0B / 0B 55 11:48:14 465d7b3d3779 kafka 74.38% 373.6MiB / 31.41GiB 1.16% 68.9kB / 72kB 0B / 475kB 83 11:48:14 9d17878e6639 mariadb 0.01% 102.6MiB / 31.41GiB 0.32% 996kB / 1.19MB 11.1MB / 47.9MB 43 11:48:14 d42d632ca263 compose_zookeeper_1 0.06% 99.85MiB / 31.41GiB 0.31% 55.7kB / 49.4kB 0B / 393kB 60 11:48:14 f7749d4dabe5 prometheus 0.00% 18.35MiB / 31.41GiB 0.06% 1.03kB / 158B 0B / 0B 13 11:48:14 3fe5234129c0 simulator 0.07% 119.4MiB / 31.41GiB 0.37% 1.15kB / 0B 0B / 0B 76 11:48:14 + echo 11:48:14 11:48:14 + cd /tmp/tmp.jgsfe5r9Wz 11:48:14 + echo 'Reading the testplan:' 11:48:14 Reading the testplan: 11:48:14 + echo 'pap-test.robot 11:48:14 pap-slas.robot' 11:48:14 + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' 11:48:14 + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|' 11:48:14 + cat testplan.txt 11:48:14 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot 11:48:14 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot 11:48:14 ++ xargs 11:48:14 + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot 
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot' 11:48:14 + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 11:48:14 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' 11:48:14 ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 11:48:14 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 11:48:14 + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...' 11:48:14 Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ... 11:48:14 + relax_set 11:48:14 + set +e 11:48:14 + set +o pipefail 11:48:14 + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot 11:48:15 ============================================================================== 11:48:15 pap 11:48:15 ============================================================================== 11:48:15 pap.Pap-Test 11:48:15 ============================================================================== 11:48:16 LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS | 11:48:16 ------------------------------------------------------------------------------ 11:48:16 LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS | 11:48:16 ------------------------------------------------------------------------------ 11:48:17 LoadNodeTemplates :: Create node templates in database using speci... | PASS | 11:48:17 ------------------------------------------------------------------------------ 11:48:17 Healthcheck :: Verify policy pap health check | PASS | 11:48:17 ------------------------------------------------------------------------------ 11:48:37 Consolidated Healthcheck :: Verify policy consolidated health check | PASS | 11:48:37 ------------------------------------------------------------------------------ 11:48:38 Metrics :: Verify policy pap is exporting prometheus metrics | PASS | 11:48:38 ------------------------------------------------------------------------------ 11:48:38 AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... 
| PASS | 11:48:38 ------------------------------------------------------------------------------ 11:48:38 QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS | 11:48:38 ------------------------------------------------------------------------------ 11:48:39 ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS | 11:48:39 ------------------------------------------------------------------------------ 11:48:39 QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS | 11:48:39 ------------------------------------------------------------------------------ 11:48:39 DeployPdpGroups :: Deploy policies in PdpGroups | PASS | 11:48:39 ------------------------------------------------------------------------------ 11:48:39 QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS | 11:48:39 ------------------------------------------------------------------------------ 11:48:39 QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS | 11:48:39 ------------------------------------------------------------------------------ 11:48:40 QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS | 11:48:40 ------------------------------------------------------------------------------ 11:48:40 UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS | 11:48:40 ------------------------------------------------------------------------------ 11:48:40 UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS | 11:48:40 ------------------------------------------------------------------------------ 11:48:40 QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS | 11:48:40 ------------------------------------------------------------------------------ 11:49:00 QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS | 11:49:00 ------------------------------------------------------------------------------ 11:49:01 QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS | 11:49:01 ------------------------------------------------------------------------------ 11:49:01 DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS | 11:49:01 ------------------------------------------------------------------------------ 11:49:01 DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS | 11:49:01 ------------------------------------------------------------------------------ 11:49:01 QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS | 11:49:01 ------------------------------------------------------------------------------ 11:49:01 pap.Pap-Test | PASS | 11:49:01 22 tests, 22 passed, 0 failed 11:49:01 ============================================================================== 11:49:01 pap.Pap-Slas 11:49:01 ============================================================================== 11:50:01 WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS | 11:50:01 ------------------------------------------------------------------------------ 11:50:01 ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS | 11:50:01 ------------------------------------------------------------------------------ 11:50:01 ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... 
| PASS | 11:50:01 ------------------------------------------------------------------------------ 11:50:01 ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS | 11:50:01 ------------------------------------------------------------------------------ 11:50:01 ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS | 11:50:01 ------------------------------------------------------------------------------ 11:50:01 ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS | 11:50:01 ------------------------------------------------------------------------------ 11:50:01 ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS | 11:50:01 ------------------------------------------------------------------------------ 11:50:01 ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS | 11:50:01 ------------------------------------------------------------------------------ 11:50:01 pap.Pap-Slas | PASS | 11:50:01 8 tests, 8 passed, 0 failed 11:50:01 ============================================================================== 11:50:01 pap | PASS | 11:50:01 30 tests, 30 passed, 0 failed 11:50:01 ============================================================================== 11:50:01 Output: /tmp/tmp.jgsfe5r9Wz/output.xml 11:50:01 Log: /tmp/tmp.jgsfe5r9Wz/log.html 11:50:01 Report: /tmp/tmp.jgsfe5r9Wz/report.html 11:50:01 + RESULT=0 11:50:01 + load_set 11:50:01 + _setopts=hxB 11:50:01 ++ echo braceexpand:hashall:interactive-comments:xtrace 11:50:01 ++ tr : ' ' 11:50:01 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 11:50:01 + set +o braceexpand 11:50:01 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 11:50:01 + set +o hashall 11:50:01 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 11:50:01 + set +o interactive-comments 11:50:01 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 11:50:01 + set +o xtrace 11:50:01 ++ echo hxB 11:50:01 ++ sed 's/./& /g' 11:50:01 + for i in $(echo "$_setopts" | sed 's/./& /g') 11:50:01 + set +h 11:50:01 + for i in $(echo "$_setopts" | sed 's/./& /g') 11:50:01 + set +x 11:50:01 + echo 'RESULT: 0' 11:50:01 RESULT: 0 11:50:01 + exit 0 11:50:01 + on_exit 11:50:01 + rc=0 11:50:01 + [[ -n /w/workspace/policy-pap-master-project-csit-pap ]] 11:50:01 + docker ps --format 'table {{ .Names }}\t{{ .Status }}' 11:50:01 NAMES STATUS 11:50:01 grafana Up 2 minutes 11:50:01 policy-apex-pdp Up 2 minutes 11:50:01 policy-pap Up 2 minutes 11:50:01 policy-api Up 2 minutes 11:50:01 kafka Up 2 minutes 11:50:01 mariadb Up 2 minutes 11:50:01 compose_zookeeper_1 Up 2 minutes 11:50:01 prometheus Up 2 minutes 11:50:01 simulator Up 2 minutes 11:50:01 + docker_stats 11:50:01 ++ uname -s 11:50:01 + '[' Linux == Darwin ']' 11:50:01 + sh -c 'top -bn1 | head -3' 11:50:02 top - 11:50:02 up 6 min, 0 users, load average: 0.67, 1.09, 0.54 11:50:02 Tasks: 201 total, 1 running, 129 sleeping, 0 stopped, 0 zombie 11:50:02 %Cpu(s): 10.3 us, 2.0 sy, 0.0 ni, 83.6 id, 4.0 wa, 0.0 hi, 0.1 si, 0.0 st 11:50:02 + echo 11:50:02 11:50:02 + sh -c 'free -h' 11:50:02 total used free shared buff/cache available 11:50:02 Mem: 31G 2.7G 22G 1.3M 6.0G 28G 11:50:02 Swap: 1.0G 0B 1.0G 11:50:02 + echo 11:50:02 11:50:02 + docker ps --format 'table {{ .Names }}\t{{ .Status }}' 11:50:02 NAMES STATUS 11:50:02 grafana Up 2 minutes 11:50:02 policy-apex-pdp Up 2 minutes 11:50:02 policy-pap Up 2 minutes 11:50:02 policy-api Up 2 minutes 11:50:02 kafka Up 2 minutes 11:50:02 mariadb Up 2 minutes 11:50:02 compose_zookeeper_1 Up 2 minutes 11:50:02 
prometheus Up 2 minutes 11:50:02 simulator Up 2 minutes 11:50:02 + echo 11:50:02 11:50:02 + docker stats --no-stream 11:50:04 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 11:50:04 69886185bd1e grafana 0.02% 64.03MiB / 31.41GiB 0.20% 19.6kB / 4.91kB 0B / 24.1MB 18 11:50:04 2763cb24824d policy-apex-pdp 1.45% 188.7MiB / 31.41GiB 0.59% 56.5kB / 90.8kB 0B / 0B 52 11:50:04 d084b816e76e policy-pap 1.62% 486.3MiB / 31.41GiB 1.51% 2.33MB / 816kB 0B / 154MB 65 11:50:04 4c89bf7ac2f1 policy-api 0.12% 541.4MiB / 31.41GiB 1.68% 2.49MB / 1.26MB 0B / 0B 58 11:50:04 465d7b3d3779 kafka 1.29% 397MiB / 31.41GiB 1.23% 238kB / 214kB 0B / 573kB 85 11:50:04 9d17878e6639 mariadb 0.02% 103.8MiB / 31.41GiB 0.32% 1.95MB / 4.77MB 11.1MB / 48.2MB 28 11:50:04 d42d632ca263 compose_zookeeper_1 0.08% 98.35MiB / 31.41GiB 0.31% 58.6kB / 50.9kB 0B / 393kB 60 11:50:04 f7749d4dabe5 prometheus 0.00% 24.88MiB / 31.41GiB 0.08% 191kB / 10.7kB 0B / 0B 14 11:50:04 3fe5234129c0 simulator 0.09% 119.6MiB / 31.41GiB 0.37% 1.45kB / 0B 0B / 0B 78 11:50:04 + echo 11:50:04 11:50:04 + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh 11:50:04 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']' 11:50:04 + relax_set 11:50:04 + set +e 11:50:04 + set +o pipefail 11:50:04 + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh 11:50:04 ++ echo 'Shut down started!' 11:50:04 Shut down started! 11:50:04 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 11:50:04 ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose 11:50:04 ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose 11:50:04 ++ source export-ports.sh 11:50:04 ++ source get-versions.sh 11:50:06 ++ echo 'Collecting logs from docker compose containers...' 11:50:06 Collecting logs from docker compose containers... 11:50:06 ++ docker-compose logs 11:50:08 ++ cat docker_compose.log 11:50:08 Attaching to grafana, policy-apex-pdp, policy-pap, policy-api, kafka, policy-db-migrator, mariadb, compose_zookeeper_1, prometheus, simulator 11:50:08 zookeeper_1 | ===> User 11:50:08 zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 11:50:08 zookeeper_1 | ===> Configuring ... 11:50:08 zookeeper_1 | ===> Running preflight checks ... 11:50:08 zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ... 11:50:08 zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ... 11:50:08 zookeeper_1 | ===> Launching ... 11:50:08 zookeeper_1 | ===> Launching zookeeper ... 
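(Editor's note) The teardown above sources stop-compose.sh, runs docker-compose logs and then cats docker_compose.log, so the compose output dumped below was evidently captured to a file before the stack is removed. A minimal sketch of such a capture step; the exact redirection used by stop-compose.sh is not shown in this excerpt, and the service names are inferred from the container names above:

    # capture all service logs to a file, then view selected services' output
    docker-compose logs --no-color > docker_compose.log
    docker-compose logs --no-color zookeeper kafka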
11:50:08 zookeeper_1 | [2024-02-21 11:47:37,046] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,052] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,052] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,052] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,052] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,054] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,054] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,054] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,054] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,055] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,056] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,056] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,056] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,056] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,056] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,056] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,067] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@26275bef (org.apache.zookeeper.server.ServerMetrics) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,070] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,070] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,072] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,082] INFO (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,082] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,082] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,082] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) 
11:50:08 zookeeper_1 | [2024-02-21 11:47:37,082] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,082] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,082] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,082] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,082] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,082] INFO (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,083] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,083] INFO Server environment:host.name=d42d632ca263 (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,083] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,083] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,083] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,083] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/
java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/jav
a/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,083] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,084] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,084] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,084] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,084] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,084] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,084] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,084] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,084] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,084] INFO Server environment:os.memory.free=490MB (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,084] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,084] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,084] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,084] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,084] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,084] 
INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,084] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,084] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,084] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,085] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,086] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,086] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,087] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,087] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,088] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,088] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,088] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,088] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,088] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,088] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,090] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,090] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,090] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,091] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,091] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,110] INFO Logging initialized @702ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,189] WARN o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,189] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,207] INFO jetty-9.4.53.v20231009; built: 
2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,232] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,232] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,233] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,235] WARN ServletContext@o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,243] INFO Started o.e.j.s.ServletContextHandler@5be1d0a4{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,259] INFO Started ServerConnector@4f32a3ad{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,260] INFO Started @851ms (org.eclipse.jetty.server.Server) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,260] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,265] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,265] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,267] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 
(org.apache.zookeeper.server.NIOServerCnxnFactory) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,268] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,286] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,286] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,287] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,287] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,292] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,292] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,294] INFO Snapshot loaded in 7 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,295] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,295] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,305] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,306] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,321] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) 11:50:08 zookeeper_1 | [2024-02-21 11:47:37,322] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) 11:50:08 zookeeper_1 | [2024-02-21 11:47:38,479] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) 11:50:08 kafka | ===> User 11:50:08 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 11:50:08 kafka | ===> Configuring ... 11:50:08 kafka | Running in Zookeeper mode... 11:50:08 kafka | ===> Running preflight checks ... 11:50:08 kafka | ===> Check if /var/lib/kafka/data is writable ... 11:50:08 kafka | ===> Check if Zookeeper is healthy ... 11:50:08 kafka | SLF4J: Class path contains multiple SLF4J bindings. 11:50:08 kafka | SLF4J: Found binding in [jar:file:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] 11:50:08 kafka | SLF4J: Found binding in [jar:file:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] 11:50:08 kafka | SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 
11:50:08 kafka | SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory] 11:50:08 kafka | [2024-02-21 11:47:38,410] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:38,410] INFO Client environment:host.name=465d7b3d3779 (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:38,410] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:38,410] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:38,410] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:38,410] INFO Client environment:java.class.path=/usr/share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/share/java/kafka/jersey-common-2.39.1.jar:/usr/share/java/kafka/swagger-annotations-2.2.8.jar:/usr/share/java/kafka/jose4j-0.9.3.jar:/usr/share/java/kafka/commons-validator-1.7.jar:/usr/share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/share/java/kafka/rocksdbjni-7.9.2.jar:/usr/share/java/kafka/jackson-annotations-2.13.5.jar:/usr/share/java/kafka/commons-io-2.11.0.jar:/usr/share/java/kafka/javax.activation-api-1.2.0.jar:/usr/share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/share/java/kafka/commons-cli-1.4.jar:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/share/java/kafka/scala-reflect-2.13.11.jar:/usr/share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/share/java/kafka/jline-3.22.0.jar:/usr/share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/share/java/kafka/hk2-api-2.6.1.jar:/usr/share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/share/java/kafka/kafka.jar:/usr/share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/share/java/kafka/scala-library-2.13.11.jar:/usr/share/java/kafka/jakarta.inject-2.6.1.jar:/usr/share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/share/java/kafka/hk2-locator-2.6.1.jar:/usr/share/java/kafka/reflections-0.10.2.jar:/usr/share/java/kafka/slf4j-api-1.7.36.jar:/usr/share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/share/java/kafka/paranamer-2.8.jar:/usr/share/java/kafka/commons-beanutils-1.9.4.jar:/usr/share/java/kafka/jaxb-api-2.3.1.jar:/usr/share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/share/java/kafka/hk2-utils-2.6.1.jar:/usr/share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/share/java/kafka/reload4j-1.2.25.jar:/usr/share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/share/java/kafka/jackson-core-2.13.5.jar:/usr/share/java/kafka/jersey-hk2-2.39.1.jar:/usr/share/java/kafka/jackson-databin
d-2.13.5.jar:/usr/share/java/kafka/jersey-client-2.39.1.jar:/usr/share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/share/java/kafka/commons-digester-2.1.jar:/usr/share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/share/java/kafka/argparse4j-0.7.0.jar:/usr/share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/kafka/audience-annotations-0.12.0.jar:/usr/share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/kafka/maven-artifact-3.8.8.jar:/usr/share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/share/java/kafka/jersey-server-2.39.1.jar:/usr/share/java/kafka/commons-lang3-3.8.1.jar:/usr/share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/share/java/kafka/jopt-simple-5.0.4.jar:/usr/share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/share/java/kafka/lz4-java-1.8.0.jar:/usr/share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/share/java/kafka/checker-qual-3.19.0.jar:/usr/share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/share/java/kafka/pcollections-4.0.1.jar:/usr/share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/share/java/kafka/commons-logging-1.2.jar:/usr/share/java/kafka/jsr305-3.0.2.jar:/usr/share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/kafka/metrics-core-2.2.0.jar:/usr/share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/share/java/kafka/commons-collections-3.2.2.jar:/usr/share/java/kafka/javassist-3.29.2-GA.jar:/usr/share/java/kafka/caffeine-2.9.3.jar:/usr/share/java/kafka/plexus-utils-3.3.1.jar:/usr/share/java/kafka/zookeeper-3.8.3.jar:/usr/share/java/kafka/activation-1.1.1.jar:/usr/share/java/kafka/netty-common-4.1.100.Final.jar:/usr/share/java/kafka/metrics-core-4.1.12.1.jar:/usr/share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/share/java/kafka/snappy-java-1.1.10.5.jar:/usr/share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/
usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/common-utils-7.6.0.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/utility-belt-7.6.0.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:38,410] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:38,410] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:38,410] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:38,410] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:38,410] INFO Client environment:os.arch=amd64 
(org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:38,410] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:38,410] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:38,410] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:38,410] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:38,410] INFO Client environment:os.memory.free=487MB (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:38,411] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:38,411] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:38,414] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@184cf7cf (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:38,417] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 11:50:08 kafka | [2024-02-21 11:47:38,422] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) 11:50:08 kafka | [2024-02-21 11:47:38,429] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 11:50:08 kafka | [2024-02-21 11:47:38,452] INFO Opening socket connection to server zookeeper/172.17.0.2:2181. (org.apache.zookeeper.ClientCnxn) 11:50:08 kafka | [2024-02-21 11:47:38,452] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) 11:50:08 kafka | [2024-02-21 11:47:38,459] INFO Socket connection established, initiating session, client: /172.17.0.7:57840, server: zookeeper/172.17.0.2:2181 (org.apache.zookeeper.ClientCnxn) 11:50:08 kafka | [2024-02-21 11:47:38,505] INFO Session establishment complete on server zookeeper/172.17.0.2:2181, session id = 0x1000004036c0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) 11:50:08 kafka | [2024-02-21 11:47:38,630] INFO Session: 0x1000004036c0000 closed (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:38,631] INFO EventThread shut down for session: 0x1000004036c0000 (org.apache.zookeeper.ClientCnxn) 11:50:08 kafka | Using log4j config /etc/kafka/log4j.properties 11:50:08 kafka | ===> Launching ... 11:50:08 kafka | ===> Launching kafka ... 11:50:08 kafka | [2024-02-21 11:47:39,370] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 11:50:08 kafka | [2024-02-21 11:47:39,746] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 11:50:08 kafka | [2024-02-21 11:47:39,828] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) 11:50:08 kafka | [2024-02-21 11:47:39,829] INFO starting (kafka.server.KafkaServer) 11:50:08 kafka | [2024-02-21 11:47:39,830] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 11:50:08 kafka | [2024-02-21 11:47:39,849] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. 
(kafka.zookeeper.ZooKeeperClient) 11:50:08 kafka | [2024-02-21 11:47:39,855] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:39,855] INFO Client environment:host.name=465d7b3d3779 (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:39,855] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:39,855] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:39,855] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:39,855] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/u
sr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr
/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:39,856] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:39,856] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:39,856] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:39,856] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:39,856] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:39,856] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:39,856] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:39,856] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:39,856] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:39,856] INFO Client environment:os.memory.free=1007MB (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:39,856] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:39,856] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:39,858] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@5b619d14 (org.apache.zookeeper.ZooKeeper) 11:50:08 kafka | [2024-02-21 11:47:39,863] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 11:50:08 kafka | [2024-02-21 11:47:39,869] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 11:50:08 kafka | [2024-02-21 11:47:39,881] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) 11:50:08 kafka | [2024-02-21 11:47:39,883] INFO Opening socket connection to server zookeeper/172.17.0.2:2181. (org.apache.zookeeper.ClientCnxn) 11:50:08 kafka | [2024-02-21 11:47:39,891] INFO Socket connection established, initiating session, client: /172.17.0.7:57842, server: zookeeper/172.17.0.2:2181 (org.apache.zookeeper.ClientCnxn) 11:50:08 kafka | [2024-02-21 11:47:39,901] INFO Session establishment complete on server zookeeper/172.17.0.2:2181, session id = 0x1000004036c0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 11:50:08 kafka | [2024-02-21 11:47:39,907] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) 11:50:08 kafka | [2024-02-21 11:47:40,205] INFO Cluster ID = NROpzKGmRGeJsBLulqXClg (kafka.server.KafkaServer) 11:50:08 kafka | [2024-02-21 11:47:40,208] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 11:50:08 kafka | [2024-02-21 11:47:40,268] INFO KafkaConfig values: 11:50:08 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 11:50:08 kafka | alter.config.policy.class.name = null 11:50:08 kafka | alter.log.dirs.replication.quota.window.num = 11 11:50:08 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 11:50:08 kafka | authorizer.class.name = 11:50:08 kafka | auto.create.topics.enable = true 11:50:08 kafka | auto.include.jmx.reporter = true 11:50:08 kafka | auto.leader.rebalance.enable = true 11:50:08 kafka | background.threads = 10 11:50:08 kafka | broker.heartbeat.interval.ms = 2000 11:50:08 kafka | broker.id = 1 11:50:08 kafka | broker.id.generation.enable = true 11:50:08 kafka | broker.rack = null 11:50:08 kafka | broker.session.timeout.ms = 9000 11:50:08 kafka | client.quota.callback.class = null 11:50:08 kafka | compression.type = producer 11:50:08 kafka | connection.failed.authentication.delay.ms = 100 11:50:08 kafka | connections.max.idle.ms = 600000 11:50:08 kafka | connections.max.reauth.ms = 0 11:50:08 kafka | control.plane.listener.name = null 11:50:08 kafka | controlled.shutdown.enable = true 11:50:08 kafka | controlled.shutdown.max.retries = 3 11:50:08 kafka | controlled.shutdown.retry.backoff.ms = 5000 11:50:08 kafka | controller.listener.names = null 11:50:08 kafka | controller.quorum.append.linger.ms = 25 11:50:08 kafka | controller.quorum.election.backoff.max.ms = 1000 11:50:08 kafka | controller.quorum.election.timeout.ms = 1000 11:50:08 kafka | controller.quorum.fetch.timeout.ms = 2000 11:50:08 kafka | controller.quorum.request.timeout.ms = 2000 11:50:08 kafka | controller.quorum.retry.backoff.ms = 20 11:50:08 kafka | controller.quorum.voters = [] 11:50:08 kafka | controller.quota.window.num = 11 11:50:08 kafka | controller.quota.window.size.seconds = 1 11:50:08 kafka | controller.socket.timeout.ms = 30000 11:50:08 kafka | create.topic.policy.class.name = null 11:50:08 kafka | default.replication.factor = 1 11:50:08 kafka | delegation.token.expiry.check.interval.ms = 3600000 11:50:08 kafka | delegation.token.expiry.time.ms = 86400000 11:50:08 kafka | delegation.token.master.key = null 11:50:08 kafka | delegation.token.max.lifetime.ms = 604800000 11:50:08 kafka | delegation.token.secret.key = null 11:50:08 kafka | delete.records.purgatory.purge.interval.requests = 1 11:50:08 kafka | delete.topic.enable = true 11:50:08 kafka | early.start.listeners = null 11:50:08 kafka | fetch.max.bytes = 57671680 11:50:08 kafka | fetch.purgatory.purge.interval.requests = 1000 11:50:08 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] 11:50:08 kafka | group.consumer.heartbeat.interval.ms = 5000 11:50:08 kafka | group.consumer.max.heartbeat.interval.ms = 15000 11:50:08 kafka | group.consumer.max.session.timeout.ms = 60000 11:50:08 kafka | group.consumer.max.size = 2147483647 11:50:08 kafka | group.consumer.min.heartbeat.interval.ms = 5000 11:50:08 kafka | group.consumer.min.session.timeout.ms = 45000 11:50:08 kafka | group.consumer.session.timeout.ms = 45000 11:50:08 kafka | group.coordinator.new.enable = false 11:50:08 kafka | group.coordinator.threads = 1 11:50:08 kafka | 
group.initial.rebalance.delay.ms = 3000 11:50:08 kafka | group.max.session.timeout.ms = 1800000 11:50:08 kafka | group.max.size = 2147483647 11:50:08 kafka | group.min.session.timeout.ms = 6000 11:50:08 kafka | initial.broker.registration.timeout.ms = 60000 11:50:08 kafka | inter.broker.listener.name = PLAINTEXT 11:50:08 kafka | inter.broker.protocol.version = 3.6-IV2 11:50:08 kafka | kafka.metrics.polling.interval.secs = 10 11:50:08 kafka | kafka.metrics.reporters = [] 11:50:08 kafka | leader.imbalance.check.interval.seconds = 300 11:50:08 kafka | leader.imbalance.per.broker.percentage = 10 11:50:08 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 11:50:08 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 11:50:08 kafka | log.cleaner.backoff.ms = 15000 11:50:08 kafka | log.cleaner.dedupe.buffer.size = 134217728 11:50:08 kafka | log.cleaner.delete.retention.ms = 86400000 11:50:08 kafka | log.cleaner.enable = true 11:50:08 kafka | log.cleaner.io.buffer.load.factor = 0.9 11:50:08 kafka | log.cleaner.io.buffer.size = 524288 11:50:08 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 11:50:08 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 11:50:08 kafka | log.cleaner.min.cleanable.ratio = 0.5 11:50:08 kafka | log.cleaner.min.compaction.lag.ms = 0 11:50:08 kafka | log.cleaner.threads = 1 11:50:08 kafka | log.cleanup.policy = [delete] 11:50:08 kafka | log.dir = /tmp/kafka-logs 11:50:08 kafka | log.dirs = /var/lib/kafka/data 11:50:08 kafka | log.flush.interval.messages = 9223372036854775807 11:50:08 kafka | log.flush.interval.ms = null 11:50:08 kafka | log.flush.offset.checkpoint.interval.ms = 60000 11:50:08 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 11:50:08 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 11:50:08 kafka | log.index.interval.bytes = 4096 11:50:08 kafka | log.index.size.max.bytes = 10485760 11:50:08 kafka | log.local.retention.bytes = -2 11:50:08 kafka | log.local.retention.ms = -2 11:50:08 kafka | log.message.downconversion.enable = true 11:50:08 kafka | log.message.format.version = 3.0-IV1 11:50:08 kafka | log.message.timestamp.after.max.ms = 9223372036854775807 11:50:08 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 11:50:08 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 11:50:08 kafka | log.message.timestamp.type = CreateTime 11:50:08 kafka | log.preallocate = false 11:50:08 kafka | log.retention.bytes = -1 11:50:08 kafka | log.retention.check.interval.ms = 300000 11:50:08 kafka | log.retention.hours = 168 11:50:08 kafka | log.retention.minutes = null 11:50:08 kafka | log.retention.ms = null 11:50:08 kafka | log.roll.hours = 168 11:50:08 kafka | log.roll.jitter.hours = 0 11:50:08 kafka | log.roll.jitter.ms = null 11:50:08 kafka | log.roll.ms = null 11:50:08 kafka | log.segment.bytes = 1073741824 11:50:08 kafka | log.segment.delete.delay.ms = 60000 11:50:08 kafka | max.connection.creation.rate = 2147483647 11:50:08 kafka | max.connections = 2147483647 11:50:08 kafka | max.connections.per.ip = 2147483647 11:50:08 kafka | max.connections.per.ip.overrides = 11:50:08 kafka | max.incremental.fetch.session.cache.slots = 1000 11:50:08 kafka | message.max.bytes = 1048588 11:50:08 kafka | metadata.log.dir = null 11:50:08 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 11:50:08 kafka | metadata.log.max.snapshot.interval.ms = 3600000 11:50:08 kafka | metadata.log.segment.bytes = 1073741824 11:50:08 
kafka | metadata.log.segment.min.bytes = 8388608 11:50:08 kafka | metadata.log.segment.ms = 604800000 11:50:08 kafka | metadata.max.idle.interval.ms = 500 11:50:08 kafka | metadata.max.retention.bytes = 104857600 11:50:08 kafka | metadata.max.retention.ms = 604800000 11:50:08 kafka | metric.reporters = [] 11:50:08 kafka | metrics.num.samples = 2 11:50:08 kafka | metrics.recording.level = INFO 11:50:08 kafka | metrics.sample.window.ms = 30000 11:50:08 kafka | min.insync.replicas = 1 11:50:08 kafka | node.id = 1 11:50:08 kafka | num.io.threads = 8 11:50:08 kafka | num.network.threads = 3 11:50:08 grafana | logger=settings t=2024-02-21T11:47:41.715259508Z level=info msg="Starting Grafana" version=10.3.3 commit=252761264e22ece57204b327f9130d3b44592c01 branch=HEAD compiled=2024-02-21T11:47:41Z 11:50:08 grafana | logger=settings t=2024-02-21T11:47:41.715455Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 11:50:08 grafana | logger=settings t=2024-02-21T11:47:41.715461131Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 11:50:08 grafana | logger=settings t=2024-02-21T11:47:41.715464431Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 11:50:08 grafana | logger=settings t=2024-02-21T11:47:41.715467481Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 11:50:08 grafana | logger=settings t=2024-02-21T11:47:41.715470221Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 11:50:08 grafana | logger=settings t=2024-02-21T11:47:41.715472841Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 11:50:08 grafana | logger=settings t=2024-02-21T11:47:41.715475741Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 11:50:08 grafana | logger=settings t=2024-02-21T11:47:41.715480051Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 11:50:08 grafana | logger=settings t=2024-02-21T11:47:41.715482971Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 11:50:08 grafana | logger=settings t=2024-02-21T11:47:41.715486871Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 11:50:08 grafana | logger=settings t=2024-02-21T11:47:41.715489731Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 11:50:08 grafana | logger=settings t=2024-02-21T11:47:41.715492441Z level=info msg=Target target=[all] 11:50:08 grafana | logger=settings t=2024-02-21T11:47:41.715497141Z level=info msg="Path Home" path=/usr/share/grafana 11:50:08 grafana | logger=settings t=2024-02-21T11:47:41.715500111Z level=info msg="Path Data" path=/var/lib/grafana 11:50:08 grafana | logger=settings t=2024-02-21T11:47:41.715503141Z level=info msg="Path Logs" path=/var/log/grafana 11:50:08 grafana | logger=settings t=2024-02-21T11:47:41.715508741Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 11:50:08 grafana | logger=settings t=2024-02-21T11:47:41.715511481Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 11:50:08 grafana | logger=settings t=2024-02-21T11:47:41.715514321Z level=info msg="App mode production" 11:50:08 grafana | logger=sqlstore t=2024-02-21T11:47:41.715818846Z level=info msg="Connecting to DB" dbtype=sqlite3 11:50:08 
grafana | logger=sqlstore t=2024-02-21T11:47:41.715832266Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.716594617Z level=info msg="Starting DB migrations" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.717556022Z level=info msg="Executing migration" id="create migration_log table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.718477026Z level=info msg="Migration successfully executed" id="create migration_log table" duration=920.324µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.723876548Z level=info msg="Executing migration" id="create user table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.724340475Z level=info msg="Migration successfully executed" id="create user table" duration=463.607µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.72733965Z level=info msg="Executing migration" id="add unique index user.login" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.727843968Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=503.988µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.737481844Z level=info msg="Executing migration" id="add unique index user.email" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.738528899Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.046705ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.743830979Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.744818244Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=987.355µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.748460829Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.749099039Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=639.2µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.756390479Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.760571322Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=4.179563ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.765126781Z level=info msg="Executing migration" id="create user table v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.765956594Z level=info msg="Migration successfully executed" id="create user table v2" duration=831.093µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.770256928Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.771112722Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=855.744µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.779662931Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.78091462Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.251589ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.785158394Z level=info msg="Executing migration" id="copy data_source v1 to v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.785791133Z level=info msg="Migration 
successfully executed" id="copy data_source v1 to v2" duration=631.859µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.789947236Z level=info msg="Executing migration" id="Drop old table user_v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.791157075Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=1.212359ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.795673582Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.797543101Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.868999ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.801830356Z level=info msg="Executing migration" id="Update user table charset" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.801858077Z level=info msg="Migration successfully executed" id="Update user table charset" duration=28.611µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.805804816Z level=info msg="Executing migration" id="Add last_seen_at column to user" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.807134525Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.329149ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.813172878Z level=info msg="Executing migration" id="Add missing user data" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.813793757Z level=info msg="Migration successfully executed" id="Add missing user data" duration=619.77µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.820384966Z level=info msg="Executing migration" id="Add is_disabled column to user" 11:50:08 policy-apex-pdp | Waiting for mariadb port 3306... 11:50:08 policy-apex-pdp | mariadb (172.17.0.3:3306) open 11:50:08 policy-apex-pdp | Waiting for kafka port 9092... 11:50:08 policy-apex-pdp | Waiting for pap port 6969... 
11:50:08 policy-apex-pdp | kafka (172.17.0.7:9092) open 11:50:08 policy-apex-pdp | pap (172.17.0.9:6969) open 11:50:08 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.372+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.555+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 11:50:08 policy-apex-pdp | allow.auto.create.topics = true 11:50:08 policy-apex-pdp | auto.commit.interval.ms = 5000 11:50:08 policy-apex-pdp | auto.include.jmx.reporter = true 11:50:08 policy-apex-pdp | auto.offset.reset = latest 11:50:08 policy-apex-pdp | bootstrap.servers = [kafka:9092] 11:50:08 policy-apex-pdp | check.crcs = true 11:50:08 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 11:50:08 policy-apex-pdp | client.id = consumer-4c4f3bdc-0a77-42e6-89df-d332cf428198-1 11:50:08 policy-apex-pdp | client.rack = 11:50:08 policy-apex-pdp | connections.max.idle.ms = 540000 11:50:08 policy-apex-pdp | default.api.timeout.ms = 60000 11:50:08 policy-apex-pdp | enable.auto.commit = true 11:50:08 policy-apex-pdp | exclude.internal.topics = true 11:50:08 policy-apex-pdp | fetch.max.bytes = 52428800 11:50:08 policy-apex-pdp | fetch.max.wait.ms = 500 11:50:08 policy-apex-pdp | fetch.min.bytes = 1 11:50:08 policy-apex-pdp | group.id = 4c4f3bdc-0a77-42e6-89df-d332cf428198 11:50:08 policy-apex-pdp | group.instance.id = null 11:50:08 policy-apex-pdp | heartbeat.interval.ms = 3000 11:50:08 policy-apex-pdp | interceptor.classes = [] 11:50:08 policy-apex-pdp | internal.leave.group.on.close = true 11:50:08 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 11:50:08 policy-apex-pdp | isolation.level = read_uncommitted 11:50:08 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:50:08 policy-apex-pdp | max.partition.fetch.bytes = 1048576 11:50:08 policy-apex-pdp | max.poll.interval.ms = 300000 11:50:08 policy-apex-pdp | max.poll.records = 500 11:50:08 policy-apex-pdp | metadata.max.age.ms = 300000 11:50:08 policy-apex-pdp | metric.reporters = [] 11:50:08 policy-apex-pdp | metrics.num.samples = 2 11:50:08 policy-apex-pdp | metrics.recording.level = INFO 11:50:08 policy-apex-pdp | metrics.sample.window.ms = 30000 11:50:08 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 11:50:08 policy-apex-pdp | receive.buffer.bytes = 65536 11:50:08 policy-apex-pdp | reconnect.backoff.max.ms = 1000 11:50:08 policy-apex-pdp | reconnect.backoff.ms = 50 11:50:08 policy-apex-pdp | request.timeout.ms = 30000 11:50:08 
policy-apex-pdp | retry.backoff.ms = 100 11:50:08 policy-apex-pdp | sasl.client.callback.handler.class = null 11:50:08 policy-apex-pdp | sasl.jaas.config = null 11:50:08 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:50:08 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 11:50:08 policy-apex-pdp | sasl.kerberos.service.name = null 11:50:08 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 11:50:08 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 11:50:08 policy-apex-pdp | sasl.login.callback.handler.class = null 11:50:08 policy-apex-pdp | sasl.login.class = null 11:50:08 policy-apex-pdp | sasl.login.connect.timeout.ms = null 11:50:08 policy-apex-pdp | sasl.login.read.timeout.ms = null 11:50:08 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 11:50:08 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 11:50:08 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 11:50:08 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 11:50:08 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 11:50:08 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 11:50:08 policy-apex-pdp | sasl.mechanism = GSSAPI 11:50:08 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 11:50:08 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 11:50:08 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 11:50:08 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:50:08 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:50:08 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:50:08 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 11:50:08 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 11:50:08 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 11:50:08 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 11:50:08 policy-apex-pdp | security.protocol = PLAINTEXT 11:50:08 policy-apex-pdp | security.providers = null 11:50:08 policy-apex-pdp | send.buffer.bytes = 131072 11:50:08 policy-apex-pdp | session.timeout.ms = 45000 11:50:08 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 11:50:08 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 11:50:08 policy-apex-pdp | ssl.cipher.suites = null 11:50:08 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:50:08 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 11:50:08 policy-apex-pdp | ssl.engine.factory.class = null 11:50:08 policy-apex-pdp | ssl.key.password = null 11:50:08 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 11:50:08 policy-apex-pdp | ssl.keystore.certificate.chain = null 11:50:08 policy-apex-pdp | ssl.keystore.key = null 11:50:08 policy-apex-pdp | ssl.keystore.location = null 11:50:08 policy-apex-pdp | ssl.keystore.password = null 11:50:08 policy-apex-pdp | ssl.keystore.type = JKS 11:50:08 policy-apex-pdp | ssl.protocol = TLSv1.3 11:50:08 policy-apex-pdp | ssl.provider = null 11:50:08 policy-apex-pdp | ssl.secure.random.implementation = null 11:50:08 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 11:50:08 policy-apex-pdp | ssl.truststore.certificates = null 11:50:08 policy-apex-pdp | ssl.truststore.location = null 11:50:08 policy-apex-pdp | ssl.truststore.password = null 11:50:08 policy-apex-pdp | ssl.truststore.type = JKS 11:50:08 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:50:08 policy-apex-pdp | 11:50:08 policy-apex-pdp | 
[2024-02-21T11:48:12.719+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.719+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.719+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708516092717 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.721+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-4c4f3bdc-0a77-42e6-89df-d332cf428198-1, groupId=4c4f3bdc-0a77-42e6-89df-d332cf428198] Subscribed to topic(s): policy-pdp-pap 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.733+00:00|INFO|ServiceManager|main] service manager starting 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.733+00:00|INFO|ServiceManager|main] service manager starting topics 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.736+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=4c4f3bdc-0a77-42e6-89df-d332cf428198, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.755+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 11:50:08 policy-apex-pdp | allow.auto.create.topics = true 11:50:08 policy-apex-pdp | auto.commit.interval.ms = 5000 11:50:08 policy-apex-pdp | auto.include.jmx.reporter = true 11:50:08 policy-apex-pdp | auto.offset.reset = latest 11:50:08 policy-apex-pdp | bootstrap.servers = [kafka:9092] 11:50:08 policy-apex-pdp | check.crcs = true 11:50:08 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 11:50:08 policy-apex-pdp | client.id = consumer-4c4f3bdc-0a77-42e6-89df-d332cf428198-2 11:50:08 policy-apex-pdp | client.rack = 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.822380976Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.99491ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.826125063Z level=info msg="Executing migration" id="Add index user.login/user.email" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.827117648Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=989.166µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.835658877Z level=info msg="Executing migration" id="Add is_service_account column to user" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.837666487Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=2.00657ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.841761399Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.854196667Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=12.435968ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.858475092Z level=info msg="Executing migration" id="create temp user table v1-7" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.859108491Z level=info msg="Migration successfully executed" id="create temp user table v1-7" 
duration=633.069µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.86234071Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.862973129Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=632.179µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.869032932Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.870117108Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.081286ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.874511025Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.875318086Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=807.871µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.879246985Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.879997266Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=750.051µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.887301077Z level=info msg="Executing migration" id="Update temp_user table charset" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.887348648Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=48.861µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.937530386Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.93907202Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.588024ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.944631194Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.945914113Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.282539ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.950429982Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.951475847Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.049255ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.956516734Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.95759879Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.080716ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.961792204Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.967984267Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=6.193293ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.97280376Z level=info msg="Executing migration" id="create temp_user v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.973643222Z level=info msg="Migration successfully executed" 
id="create temp_user v2" duration=839.522µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.978822951Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.979647023Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=824.022µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.984449076Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.985777436Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.324919ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.989694965Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.991039705Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.3451ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.99472884Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:41.995583874Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=852.614µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.001793988Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.002291195Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=497.107µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.005611709Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.006212264Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=599.995µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.011145489Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.011830678Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=685.009µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.017113929Z level=info msg="Executing migration" id="create star table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.018231991Z level=info msg="Migration successfully executed" id="create star table" duration=1.118322ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.024185422Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.025561315Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.376363ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.030276753Z level=info msg="Executing migration" id="create org table v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.031811689Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.532236ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.037273395Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 11:50:08 policy-apex-pdp | connections.max.idle.ms = 540000 11:50:08 policy-apex-pdp | default.api.timeout.ms = 60000 11:50:08 policy-apex-pdp | 
enable.auto.commit = true 11:50:08 policy-apex-pdp | exclude.internal.topics = true 11:50:08 policy-apex-pdp | fetch.max.bytes = 52428800 11:50:08 policy-apex-pdp | fetch.max.wait.ms = 500 11:50:08 policy-apex-pdp | fetch.min.bytes = 1 11:50:08 policy-apex-pdp | group.id = 4c4f3bdc-0a77-42e6-89df-d332cf428198 11:50:08 policy-apex-pdp | group.instance.id = null 11:50:08 policy-apex-pdp | heartbeat.interval.ms = 3000 11:50:08 policy-apex-pdp | interceptor.classes = [] 11:50:08 policy-apex-pdp | internal.leave.group.on.close = true 11:50:08 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 11:50:08 policy-apex-pdp | isolation.level = read_uncommitted 11:50:08 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:50:08 policy-apex-pdp | max.partition.fetch.bytes = 1048576 11:50:08 policy-apex-pdp | max.poll.interval.ms = 300000 11:50:08 policy-apex-pdp | max.poll.records = 500 11:50:08 policy-apex-pdp | metadata.max.age.ms = 300000 11:50:08 policy-apex-pdp | metric.reporters = [] 11:50:08 policy-apex-pdp | metrics.num.samples = 2 11:50:08 policy-apex-pdp | metrics.recording.level = INFO 11:50:08 policy-apex-pdp | metrics.sample.window.ms = 30000 11:50:08 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 11:50:08 policy-apex-pdp | receive.buffer.bytes = 65536 11:50:08 policy-apex-pdp | reconnect.backoff.max.ms = 1000 11:50:08 policy-apex-pdp | reconnect.backoff.ms = 50 11:50:08 policy-apex-pdp | request.timeout.ms = 30000 11:50:08 policy-apex-pdp | retry.backoff.ms = 100 11:50:08 policy-apex-pdp | sasl.client.callback.handler.class = null 11:50:08 policy-apex-pdp | sasl.jaas.config = null 11:50:08 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:50:08 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 11:50:08 policy-apex-pdp | sasl.kerberos.service.name = null 11:50:08 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 11:50:08 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 11:50:08 policy-apex-pdp | sasl.login.callback.handler.class = null 11:50:08 policy-apex-pdp | sasl.login.class = null 11:50:08 policy-apex-pdp | sasl.login.connect.timeout.ms = null 11:50:08 policy-apex-pdp | sasl.login.read.timeout.ms = null 11:50:08 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 11:50:08 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 11:50:08 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 11:50:08 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 11:50:08 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 11:50:08 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 11:50:08 policy-apex-pdp | sasl.mechanism = GSSAPI 11:50:08 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 11:50:08 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 11:50:08 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 11:50:08 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:50:08 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:50:08 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:50:08 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 11:50:08 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 11:50:08 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 11:50:08 policy-apex-pdp | 
sasl.oauthbearer.token.endpoint.url = null 11:50:08 policy-apex-pdp | security.protocol = PLAINTEXT 11:50:08 policy-apex-pdp | security.providers = null 11:50:08 policy-apex-pdp | send.buffer.bytes = 131072 11:50:08 policy-apex-pdp | session.timeout.ms = 45000 11:50:08 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 11:50:08 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 11:50:08 policy-apex-pdp | ssl.cipher.suites = null 11:50:08 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:50:08 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 11:50:08 policy-apex-pdp | ssl.engine.factory.class = null 11:50:08 policy-apex-pdp | ssl.key.password = null 11:50:08 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 11:50:08 policy-apex-pdp | ssl.keystore.certificate.chain = null 11:50:08 policy-apex-pdp | ssl.keystore.key = null 11:50:08 policy-apex-pdp | ssl.keystore.location = null 11:50:08 policy-apex-pdp | ssl.keystore.password = null 11:50:08 policy-apex-pdp | ssl.keystore.type = JKS 11:50:08 policy-apex-pdp | ssl.protocol = TLSv1.3 11:50:08 policy-apex-pdp | ssl.provider = null 11:50:08 policy-apex-pdp | ssl.secure.random.implementation = null 11:50:08 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 11:50:08 policy-apex-pdp | ssl.truststore.certificates = null 11:50:08 policy-apex-pdp | ssl.truststore.location = null 11:50:08 policy-apex-pdp | ssl.truststore.password = null 11:50:08 policy-apex-pdp | ssl.truststore.type = JKS 11:50:08 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:50:08 policy-apex-pdp | 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.763+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.763+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.763+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708516092763 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.764+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-4c4f3bdc-0a77-42e6-89df-d332cf428198-2, groupId=4c4f3bdc-0a77-42e6-89df-d332cf428198] Subscribed to topic(s): policy-pdp-pap 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.764+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=2c65fed3-f5ef-460c-8a90-f0af658bc04c, alive=false, publisher=null]]: starting 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.775+00:00|INFO|ProducerConfig|main] ProducerConfig values: 11:50:08 policy-apex-pdp | acks = -1 11:50:08 policy-apex-pdp | auto.include.jmx.reporter = true 11:50:08 policy-apex-pdp | batch.size = 16384 11:50:08 policy-apex-pdp | bootstrap.servers = [kafka:9092] 11:50:08 policy-apex-pdp | buffer.memory = 33554432 11:50:08 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 11:50:08 policy-apex-pdp | client.id = producer-1 11:50:08 policy-apex-pdp | compression.type = none 11:50:08 policy-apex-pdp | connections.max.idle.ms = 540000 11:50:08 policy-apex-pdp | delivery.timeout.ms = 120000 11:50:08 policy-apex-pdp | enable.idempotence = true 11:50:08 policy-apex-pdp | interceptor.classes = [] 11:50:08 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:50:08 policy-apex-pdp | linger.ms = 0 11:50:08 policy-apex-pdp | max.block.ms = 60000 11:50:08 policy-apex-pdp | max.in.flight.requests.per.connection = 5 11:50:08 policy-apex-pdp | max.request.size = 1048576 
11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.038133233Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=860.138µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.044671899Z level=info msg="Executing migration" id="create org_user table v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.045840432Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.168413ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.050443128Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.051674311Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.231043ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.056970594Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.058247607Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.276233ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.063377429Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.064170007Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=794.828µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.070033026Z level=info msg="Executing migration" id="Update org table charset" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.070182138Z level=info msg="Migration successfully executed" id="Update org table charset" duration=148.982µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.074651123Z level=info msg="Executing migration" id="Update org_user table charset" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.074843705Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=192.122µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.079555373Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.079923607Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=368.424µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.084345071Z level=info msg="Executing migration" id="create dashboard table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.085491733Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.146362ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.091790337Z level=info msg="Executing migration" id="add index dashboard.account_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.093095431Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.304913ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.099356504Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.100207352Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=850.378µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.104545086Z level=info msg="Executing migration" id="create dashboard_tag table" 11:50:08 grafana | logger=migrator 
t=2024-02-21T11:47:42.10594097Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=1.395024ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.110116433Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.1118646Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.749037ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.116664479Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.117430006Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=811.788µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.121482908Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.127897753Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=6.414385ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.135332928Z level=info msg="Executing migration" id="create dashboard v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.136248367Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=914.049µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.141501131Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.143128257Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.625936ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.146928346Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.147780124Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=851.498µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.151906446Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.152391051Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=484.005µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.156794035Z level=info msg="Executing migration" id="drop table dashboard_v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.157633684Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=838.769µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.161856717Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.162100349Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=250.792µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.166589445Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.169477714Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.887539ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.174390644Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 11:50:08 
grafana | logger=migrator t=2024-02-21T11:47:42.176182592Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.791458ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.181634147Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.184465396Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=2.831159ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.189181454Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.190065952Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=884.328µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.197061964Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.200267756Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=3.206792ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.206156846Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 11:50:08 policy-apex-pdp | metadata.max.age.ms = 300000 11:50:08 policy-apex-pdp | metadata.max.idle.ms = 300000 11:50:08 policy-apex-pdp | metric.reporters = [] 11:50:08 policy-apex-pdp | metrics.num.samples = 2 11:50:08 policy-apex-pdp | metrics.recording.level = INFO 11:50:08 policy-apex-pdp | metrics.sample.window.ms = 30000 11:50:08 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true 11:50:08 policy-apex-pdp | partitioner.availability.timeout.ms = 0 11:50:08 policy-apex-pdp | partitioner.class = null 11:50:08 policy-apex-pdp | partitioner.ignore.keys = false 11:50:08 policy-apex-pdp | receive.buffer.bytes = 32768 11:50:08 policy-apex-pdp | reconnect.backoff.max.ms = 1000 11:50:08 policy-apex-pdp | reconnect.backoff.ms = 50 11:50:08 policy-apex-pdp | request.timeout.ms = 30000 11:50:08 policy-apex-pdp | retries = 2147483647 11:50:08 policy-apex-pdp | retry.backoff.ms = 100 11:50:08 policy-apex-pdp | sasl.client.callback.handler.class = null 11:50:08 policy-apex-pdp | sasl.jaas.config = null 11:50:08 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:50:08 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 11:50:08 policy-apex-pdp | sasl.kerberos.service.name = null 11:50:08 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 11:50:08 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 11:50:08 policy-apex-pdp | sasl.login.callback.handler.class = null 11:50:08 policy-apex-pdp | sasl.login.class = null 11:50:08 policy-apex-pdp | sasl.login.connect.timeout.ms = null 11:50:08 policy-apex-pdp | sasl.login.read.timeout.ms = null 11:50:08 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 11:50:08 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 11:50:08 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 11:50:08 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 11:50:08 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 11:50:08 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 11:50:08 policy-apex-pdp | sasl.mechanism = GSSAPI 11:50:08 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 11:50:08 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 11:50:08 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 11:50:08 
policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:50:08 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:50:08 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:50:08 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 11:50:08 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 11:50:08 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 11:50:08 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 11:50:08 kafka | num.partitions = 1 11:50:08 kafka | num.recovery.threads.per.data.dir = 1 11:50:08 kafka | num.replica.alter.log.dirs.threads = null 11:50:08 kafka | num.replica.fetchers = 1 11:50:08 kafka | offset.metadata.max.bytes = 4096 11:50:08 kafka | offsets.commit.required.acks = -1 11:50:08 kafka | offsets.commit.timeout.ms = 5000 11:50:08 kafka | offsets.load.buffer.size = 5242880 11:50:08 kafka | offsets.retention.check.interval.ms = 600000 11:50:08 kafka | offsets.retention.minutes = 10080 11:50:08 kafka | offsets.topic.compression.codec = 0 11:50:08 kafka | offsets.topic.num.partitions = 50 11:50:08 kafka | offsets.topic.replication.factor = 1 11:50:08 kafka | offsets.topic.segment.bytes = 104857600 11:50:08 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 11:50:08 kafka | password.encoder.iterations = 4096 11:50:08 kafka | password.encoder.key.length = 128 11:50:08 kafka | password.encoder.keyfactory.algorithm = null 11:50:08 kafka | password.encoder.old.secret = null 11:50:08 kafka | password.encoder.secret = null 11:50:08 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 11:50:08 kafka | process.roles = [] 11:50:08 kafka | producer.id.expiration.check.interval.ms = 600000 11:50:08 kafka | producer.id.expiration.ms = 86400000 11:50:08 kafka | producer.purgatory.purge.interval.requests = 1000 11:50:08 kafka | queued.max.request.bytes = -1 11:50:08 kafka | queued.max.requests = 500 11:50:08 kafka | quota.window.num = 11 11:50:08 kafka | quota.window.size.seconds = 1 11:50:08 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 11:50:08 kafka | remote.log.manager.task.interval.ms = 30000 11:50:08 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 11:50:08 kafka | remote.log.manager.task.retry.backoff.ms = 500 11:50:08 kafka | remote.log.manager.task.retry.jitter = 0.2 11:50:08 kafka | remote.log.manager.thread.pool.size = 10 11:50:08 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 11:50:08 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager 11:50:08 kafka | remote.log.metadata.manager.class.path = null 11:50:08 kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. 
11:50:08 kafka | remote.log.metadata.manager.listener.name = null 11:50:08 kafka | remote.log.reader.max.pending.tasks = 100 11:50:08 kafka | remote.log.reader.threads = 10 11:50:08 kafka | remote.log.storage.manager.class.name = null 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.207033594Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=876.868µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.212472899Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.214096786Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.626097ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.221255508Z level=info msg="Executing migration" id="Update dashboard table charset" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.221330969Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=75.011µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.224986647Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.225017057Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=30.17µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.229108228Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.232172489Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.062321ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.237704895Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.240054609Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.348394ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.243247621Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.245698456Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.447525ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.248855428Z level=info msg="Executing migration" id="Add column uid in dashboard" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.250893549Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.038091ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.256795099Z level=info msg="Executing migration" id="Update uid column values in dashboard" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.257099912Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=304.333µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.263686118Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.264618528Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=932.22µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.268515577Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.269378936Z level=info msg="Migration successfully 
executed" id="Remove unique index org_id_slug" duration=863.279µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.275280586Z level=info msg="Executing migration" id="Update dashboard title length" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.275540899Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=260.843µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.280004964Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.28155861Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.554116ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.285359698Z level=info msg="Executing migration" id="create dashboard_provisioning" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.286068975Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=708.747µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.292182768Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.303422181Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=11.244773ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.306376051Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.30721278Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=830.348µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.310795446Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.311772056Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=976.02µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.316092569Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.317132311Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.036671ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.323700027Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.3240847Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=384.203µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.32802597Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.329105582Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=1.078661ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.334411855Z level=info msg="Executing migration" id="Add check_sum column" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.338588627Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=4.104301ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.345156114Z level=info 
msg="Executing migration" id="Add index for dashboard_title" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.346131014Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=974.75µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.349959852Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.350351506Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=391.194µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.353836782Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.354062654Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=225.692µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.359800223Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.36060019Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=799.857µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.363793033Z level=info msg="Executing migration" id="Add isPublic for dashboard" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.366243068Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.444034ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.373497451Z level=info msg="Executing migration" id="create data_source table" 11:50:08 policy-apex-pdp | security.protocol = PLAINTEXT 11:50:08 policy-apex-pdp | security.providers = null 11:50:08 policy-apex-pdp | send.buffer.bytes = 131072 11:50:08 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 11:50:08 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 11:50:08 policy-apex-pdp | ssl.cipher.suites = null 11:50:08 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:50:08 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 11:50:08 policy-apex-pdp | ssl.engine.factory.class = null 11:50:08 policy-apex-pdp | ssl.key.password = null 11:50:08 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 11:50:08 policy-apex-pdp | ssl.keystore.certificate.chain = null 11:50:08 policy-apex-pdp | ssl.keystore.key = null 11:50:08 policy-apex-pdp | ssl.keystore.location = null 11:50:08 policy-apex-pdp | ssl.keystore.password = null 11:50:08 policy-apex-pdp | ssl.keystore.type = JKS 11:50:08 policy-apex-pdp | ssl.protocol = TLSv1.3 11:50:08 policy-apex-pdp | ssl.provider = null 11:50:08 policy-apex-pdp | ssl.secure.random.implementation = null 11:50:08 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 11:50:08 policy-apex-pdp | ssl.truststore.certificates = null 11:50:08 policy-apex-pdp | ssl.truststore.location = null 11:50:08 policy-apex-pdp | ssl.truststore.password = null 11:50:08 policy-apex-pdp | ssl.truststore.type = JKS 11:50:08 policy-apex-pdp | transaction.timeout.ms = 60000 11:50:08 policy-apex-pdp | transactional.id = null 11:50:08 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:50:08 policy-apex-pdp | 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.783+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.797+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.797+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.797+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708516092797 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.798+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=2c65fed3-f5ef-460c-8a90-f0af658bc04c, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.798+00:00|INFO|ServiceManager|main] service manager starting set alive 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.798+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.800+00:00|INFO|ServiceManager|main] service manager starting topic sinks 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.800+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.802+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.802+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.802+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.802+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=4c4f3bdc-0a77-42e6-89df-d332cf428198, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@e077866 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.802+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=4c4f3bdc-0a77-42e6-89df-d332cf428198, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.802+00:00|INFO|ServiceManager|main] service manager starting Create REST server 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.818+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: 11:50:08 policy-apex-pdp | [] 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.820+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 11:50:08 policy-apex-pdp | 
{"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"f552401e-39e5-4030-9fb5-1dc6c2b482be","timestampMs":1708516092804,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup"} 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.954+00:00|INFO|ServiceManager|main] service manager starting Rest Server 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.954+00:00|INFO|ServiceManager|main] service manager starting 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.954+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.954+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5ebd56e9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@63f34b70{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.963+00:00|INFO|ServiceManager|main] service manager started 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.963+00:00|INFO|ServiceManager|main] service manager started 11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.964+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 
11:50:08 policy-apex-pdp | [2024-02-21T11:48:12.963+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5ebd56e9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@63f34b70{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 11:50:08 policy-apex-pdp | [2024-02-21T11:48:13.120+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: NROpzKGmRGeJsBLulqXClg 11:50:08 policy-apex-pdp | [2024-02-21T11:48:13.120+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4c4f3bdc-0a77-42e6-89df-d332cf428198-2, groupId=4c4f3bdc-0a77-42e6-89df-d332cf428198] Cluster ID: NROpzKGmRGeJsBLulqXClg 11:50:08 policy-apex-pdp | [2024-02-21T11:48:13.122+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 11:50:08 policy-apex-pdp | [2024-02-21T11:48:13.122+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4c4f3bdc-0a77-42e6-89df-d332cf428198-2, groupId=4c4f3bdc-0a77-42e6-89df-d332cf428198] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 11:50:08 policy-apex-pdp | [2024-02-21T11:48:13.130+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4c4f3bdc-0a77-42e6-89df-d332cf428198-2, groupId=4c4f3bdc-0a77-42e6-89df-d332cf428198] (Re-)joining group 11:50:08 policy-apex-pdp | [2024-02-21T11:48:13.146+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4c4f3bdc-0a77-42e6-89df-d332cf428198-2, groupId=4c4f3bdc-0a77-42e6-89df-d332cf428198] Request joining group due to: need to re-join with the given member-id: consumer-4c4f3bdc-0a77-42e6-89df-d332cf428198-2-e4302eb2-03de-4260-b77f-25ccc4bdfb26 11:50:08 policy-apex-pdp | [2024-02-21T11:48:13.146+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4c4f3bdc-0a77-42e6-89df-d332cf428198-2, groupId=4c4f3bdc-0a77-42e6-89df-d332cf428198] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 11:50:08 policy-apex-pdp | [2024-02-21T11:48:13.146+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4c4f3bdc-0a77-42e6-89df-d332cf428198-2, groupId=4c4f3bdc-0a77-42e6-89df-d332cf428198] (Re-)joining group 11:50:08 policy-apex-pdp | [2024-02-21T11:48:13.581+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls 11:50:08 policy-apex-pdp | [2024-02-21T11:48:13.583+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls 11:50:08 kafka | remote.log.storage.manager.class.path = null 11:50:08 kafka | remote.log.storage.manager.impl.prefix = rsm.config. 11:50:08 kafka | remote.log.storage.system.enable = false 11:50:08 kafka | replica.fetch.backoff.ms = 1000 11:50:08 kafka | replica.fetch.max.bytes = 1048576 11:50:08 kafka | replica.fetch.min.bytes = 1 11:50:08 kafka | replica.fetch.response.max.bytes = 10485760 11:50:08 kafka | replica.fetch.wait.max.ms = 500 11:50:08 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 11:50:08 kafka | replica.lag.time.max.ms = 30000 11:50:08 kafka | replica.selector.class = null 11:50:08 kafka | replica.socket.receive.buffer.bytes = 65536 11:50:08 kafka | replica.socket.timeout.ms = 30000 11:50:08 kafka | replication.quota.window.num = 11 11:50:08 kafka | replication.quota.window.size.seconds = 1 11:50:08 kafka | request.timeout.ms = 30000 11:50:08 kafka | reserved.broker.max.id = 1000 11:50:08 kafka | sasl.client.callback.handler.class = null 11:50:08 kafka | sasl.enabled.mechanisms = [GSSAPI] 11:50:08 kafka | sasl.jaas.config = null 11:50:08 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:50:08 kafka | sasl.kerberos.min.time.before.relogin = 60000 11:50:08 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 11:50:08 kafka | sasl.kerberos.service.name = null 11:50:08 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 11:50:08 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 11:50:08 kafka | sasl.login.callback.handler.class = null 11:50:08 kafka | sasl.login.class = null 11:50:08 kafka | sasl.login.connect.timeout.ms = null 11:50:08 kafka | sasl.login.read.timeout.ms = null 11:50:08 kafka | sasl.login.refresh.buffer.seconds = 300 11:50:08 kafka | sasl.login.refresh.min.period.seconds = 60 11:50:08 kafka | sasl.login.refresh.window.factor = 0.8 11:50:08 kafka | sasl.login.refresh.window.jitter = 0.05 11:50:08 kafka | sasl.login.retry.backoff.max.ms = 10000 11:50:08 kafka | sasl.login.retry.backoff.ms = 100 11:50:08 kafka | sasl.mechanism.controller.protocol = GSSAPI 11:50:08 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 11:50:08 kafka | sasl.oauthbearer.clock.skew.seconds = 30 11:50:08 kafka | sasl.oauthbearer.expected.audience = null 11:50:08 kafka | sasl.oauthbearer.expected.issuer = null 11:50:08 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:50:08 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:50:08 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:50:08 kafka | sasl.oauthbearer.jwks.endpoint.url = null 11:50:08 kafka | sasl.oauthbearer.scope.claim.name = scope 11:50:08 kafka | sasl.oauthbearer.sub.claim.name = sub 11:50:08 kafka | sasl.oauthbearer.token.endpoint.url = null 11:50:08 kafka | sasl.server.callback.handler.class = null 11:50:08 kafka | sasl.server.max.receive.size = 524288 11:50:08 kafka | security.inter.broker.protocol = PLAINTEXT 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.37434586Z level=info msg="Migration 
successfully executed" id="create data_source table" duration=852.759µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.380494482Z level=info msg="Executing migration" id="add index data_source.account_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.381524003Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.030521ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.3852276Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.386475342Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.247102ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.390222731Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.391050459Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=827.798µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.397396493Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.398192351Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=795.248µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.402494075Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.410482386Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=7.987951ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.414785969Z level=info msg="Executing migration" id="create data_source table v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.415500397Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=711.478µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.422088483Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.423256915Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=1.168222ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.427008794Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.428524088Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.514634ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.432324478Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.432971084Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=642.077µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.440177326Z level=info msg="Executing migration" id="Add column with_credentials" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.443151507Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.972921ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.449503981Z level=info msg="Executing migration" id="Add secure json data column" 11:50:08 grafana | logger=migrator 
t=2024-02-21T11:47:42.451895766Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.390395ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.456365671Z level=info msg="Executing migration" id="Update data_source table charset" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.456393971Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=27.181µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.462668994Z level=info msg="Executing migration" id="Update initial version to 1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.462929368Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=254.304µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.467909107Z level=info msg="Executing migration" id="Add read_only data column" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.471604205Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=3.693848ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.475903108Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.476141891Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=238.383µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.481351254Z level=info msg="Executing migration" id="Update json_data with nulls" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.481569626Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=218.042µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.488247314Z level=info msg="Executing migration" id="Add uid column" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.49186071Z level=info msg="Migration successfully executed" id="Add uid column" duration=3.613906ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.495994182Z level=info msg="Executing migration" id="Update uid value" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.496235674Z level=info msg="Migration successfully executed" id="Update uid value" duration=243.112µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.500626539Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.501871961Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.244042ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.508034515Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.509295337Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.259302ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.513623091Z level=info msg="Executing migration" id="create api_key table" 11:50:08 mariadb | 2024-02-21 11:47:32+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 11:50:08 mariadb | 2024-02-21 11:47:32+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 11:50:08 mariadb | 2024-02-21 11:47:32+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 
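The grafana migrator entries above log each schema migration with an id and a duration. A minimal sketch for pulling those figures out of a saved copy of this console output (the file name console.log is an assumption, not something this job produces):

# keep only the migrator lines that completed, and extract each migration id with its reported duration
grep 'logger=migrator' console.log \
  | grep 'Migration successfully executed' \
  | grep -oE 'id="[^"]+" duration=[0-9.]+(µs|ms)'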
11:50:08 mariadb | 2024-02-21 11:47:32+00:00 [Note] [Entrypoint]: Initializing database files 11:50:08 mariadb | 2024-02-21 11:47:32 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 11:50:08 mariadb | 2024-02-21 11:47:32 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 11:50:08 mariadb | 2024-02-21 11:47:32 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 11:50:08 mariadb | 11:50:08 mariadb | 11:50:08 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! 11:50:08 mariadb | To do so, start the server, then issue the following command: 11:50:08 mariadb | 11:50:08 mariadb | '/usr/bin/mysql_secure_installation' 11:50:08 mariadb | 11:50:08 mariadb | which will also give you the option of removing the test 11:50:08 mariadb | databases and anonymous user created by default. This is 11:50:08 mariadb | strongly recommended for production servers. 11:50:08 mariadb | 11:50:08 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb 11:50:08 mariadb | 11:50:08 mariadb | Please report any problems at https://mariadb.org/jira 11:50:08 mariadb | 11:50:08 mariadb | The latest information about MariaDB is available at https://mariadb.org/. 11:50:08 mariadb | 11:50:08 mariadb | Consider joining MariaDB's strong and vibrant community: 11:50:08 mariadb | https://mariadb.org/get-involved/ 11:50:08 mariadb | 11:50:08 mariadb | 2024-02-21 11:47:34+00:00 [Note] [Entrypoint]: Database files initialized 11:50:08 mariadb | 2024-02-21 11:47:34+00:00 [Note] [Entrypoint]: Starting temporary server 11:50:08 mariadb | 2024-02-21 11:47:34+00:00 [Note] [Entrypoint]: Waiting for server startup 11:50:08 mariadb | 2024-02-21 11:47:34 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 96 ... 11:50:08 mariadb | 2024-02-21 11:47:34 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 11:50:08 mariadb | 2024-02-21 11:47:34 0 [Note] InnoDB: Number of transaction pools: 1 11:50:08 mariadb | 2024-02-21 11:47:34 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 11:50:08 mariadb | 2024-02-21 11:47:34 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 11:50:08 mariadb | 2024-02-21 11:47:34 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 11:50:08 mariadb | 2024-02-21 11:47:34 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 11:50:08 mariadb | 2024-02-21 11:47:34 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 11:50:08 mariadb | 2024-02-21 11:47:34 0 [Note] InnoDB: Completed initialization of buffer pool 11:50:08 mariadb | 2024-02-21 11:47:34 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 11:50:08 mariadb | 2024-02-21 11:47:34 0 [Note] InnoDB: 128 rollback segments are active. 11:50:08 mariadb | 2024-02-21 11:47:34 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 11:50:08 mariadb | 2024-02-21 11:47:34 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 11:50:08 mariadb | 2024-02-21 11:47:34 0 [Note] InnoDB: log sequence number 46590; transaction id 14 11:50:08 mariadb | 2024-02-21 11:47:34 0 [Note] Plugin 'FEEDBACK' is disabled. 
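The MariaDB entrypoint above starts a temporary server and waits for it to come up before any init SQL is run. A minimal sketch of the same wait, assuming the root password 'secret' that the init script later uses and a server reachable over the default local socket (both assumptions):

# poll until mariadbd answers; only then is it safe to issue CREATE DATABASE / GRANT statements
until mysqladmin ping -uroot -psecret --silent; do
  sleep 2
done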
11:50:08 mariadb | 2024-02-21 11:47:34 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 11:50:08 mariadb | 2024-02-21 11:47:34 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. 11:50:08 mariadb | 2024-02-21 11:47:34 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. 11:50:08 mariadb | 2024-02-21 11:47:34 0 [Note] mariadbd: ready for connections. 11:50:08 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution 11:50:08 mariadb | 2024-02-21 11:47:35+00:00 [Note] [Entrypoint]: Temporary server started. 11:50:08 mariadb | 2024-02-21 11:47:37+00:00 [Note] [Entrypoint]: Creating user policy_user 11:50:08 mariadb | 2024-02-21 11:47:37+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) 11:50:08 mariadb | 11:50:08 mariadb | 11:50:08 mariadb | 2024-02-21 11:47:37+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf 11:50:08 mariadb | 2024-02-21 11:47:37+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh 11:50:08 mariadb | #!/bin/bash -xv 11:50:08 policy-apex-pdp | [2024-02-21T11:48:16.151+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4c4f3bdc-0a77-42e6-89df-d332cf428198-2, groupId=4c4f3bdc-0a77-42e6-89df-d332cf428198] Successfully joined group with generation Generation{generationId=1, memberId='consumer-4c4f3bdc-0a77-42e6-89df-d332cf428198-2-e4302eb2-03de-4260-b77f-25ccc4bdfb26', protocol='range'} 11:50:08 policy-apex-pdp | [2024-02-21T11:48:16.157+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4c4f3bdc-0a77-42e6-89df-d332cf428198-2, groupId=4c4f3bdc-0a77-42e6-89df-d332cf428198] Finished assignment for group at generation 1: {consumer-4c4f3bdc-0a77-42e6-89df-d332cf428198-2-e4302eb2-03de-4260-b77f-25ccc4bdfb26=Assignment(partitions=[policy-pdp-pap-0])} 11:50:08 policy-apex-pdp | [2024-02-21T11:48:16.165+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4c4f3bdc-0a77-42e6-89df-d332cf428198-2, groupId=4c4f3bdc-0a77-42e6-89df-d332cf428198] Successfully synced group in generation Generation{generationId=1, memberId='consumer-4c4f3bdc-0a77-42e6-89df-d332cf428198-2-e4302eb2-03de-4260-b77f-25ccc4bdfb26', protocol='range'} 11:50:08 policy-apex-pdp | [2024-02-21T11:48:16.165+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4c4f3bdc-0a77-42e6-89df-d332cf428198-2, groupId=4c4f3bdc-0a77-42e6-89df-d332cf428198] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 11:50:08 policy-apex-pdp | [2024-02-21T11:48:16.167+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4c4f3bdc-0a77-42e6-89df-d332cf428198-2, groupId=4c4f3bdc-0a77-42e6-89df-d332cf428198] Adding newly assigned partitions: policy-pdp-pap-0 11:50:08 policy-apex-pdp | [2024-02-21T11:48:16.175+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-4c4f3bdc-0a77-42e6-89df-d332cf428198-2, groupId=4c4f3bdc-0a77-42e6-89df-d332cf428198] Found no committed offset for partition policy-pdp-pap-0 11:50:08 policy-apex-pdp | [2024-02-21T11:48:16.184+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-4c4f3bdc-0a77-42e6-89df-d332cf428198-2, groupId=4c4f3bdc-0a77-42e6-89df-d332cf428198] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 11:50:08 policy-apex-pdp | [2024-02-21T11:48:32.803+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 11:50:08 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"01d87f90-6308-4c3c-a041-13f1212bcd60","timestampMs":1708516112802,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup"} 11:50:08 policy-apex-pdp | [2024-02-21T11:48:32.832+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:50:08 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"01d87f90-6308-4c3c-a041-13f1212bcd60","timestampMs":1708516112802,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup"} 11:50:08 policy-apex-pdp | [2024-02-21T11:48:32.835+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 11:50:08 policy-apex-pdp | [2024-02-21T11:48:32.980+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:50:08 policy-apex-pdp | {"source":"pap-901e61cf-d04a-4979-8ccb-af4a8d6816b5","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"5d9e63f8-1b57-4cf9-bb44-4b10e7689864","timestampMs":1708516112923,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:50:08 policy-apex-pdp | [2024-02-21T11:48:32.987+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher 11:50:08 policy-apex-pdp | [2024-02-21T11:48:32.988+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] 11:50:08 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"27d31384-f318-4271-8ed2-1876fd20c6e1","timestampMs":1708516112987,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup"} 11:50:08 policy-apex-pdp | [2024-02-21T11:48:32.989+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 11:50:08 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"5d9e63f8-1b57-4cf9-bb44-4b10e7689864","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"018e3451-cab0-4ff2-972b-669a0efc11c4","timestampMs":1708516112988,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:50:08 policy-apex-pdp | [2024-02-21T11:48:33.004+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:50:08 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"27d31384-f318-4271-8ed2-1876fd20c6e1","timestampMs":1708516112987,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup"} 11:50:08 policy-apex-pdp | [2024-02-21T11:48:33.005+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 11:50:08 policy-apex-pdp | 
[2024-02-21T11:48:33.012+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:50:08 kafka | security.providers = null 11:50:08 kafka | server.max.startup.time.ms = 9223372036854775807 11:50:08 kafka | socket.connection.setup.timeout.max.ms = 30000 11:50:08 kafka | socket.connection.setup.timeout.ms = 10000 11:50:08 kafka | socket.listen.backlog.size = 50 11:50:08 kafka | socket.receive.buffer.bytes = 102400 11:50:08 kafka | socket.request.max.bytes = 104857600 11:50:08 kafka | socket.send.buffer.bytes = 102400 11:50:08 kafka | ssl.cipher.suites = [] 11:50:08 kafka | ssl.client.auth = none 11:50:08 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:50:08 kafka | ssl.endpoint.identification.algorithm = https 11:50:08 kafka | ssl.engine.factory.class = null 11:50:08 kafka | ssl.key.password = null 11:50:08 kafka | ssl.keymanager.algorithm = SunX509 11:50:08 kafka | ssl.keystore.certificate.chain = null 11:50:08 kafka | ssl.keystore.key = null 11:50:08 kafka | ssl.keystore.location = null 11:50:08 kafka | ssl.keystore.password = null 11:50:08 kafka | ssl.keystore.type = JKS 11:50:08 kafka | ssl.principal.mapping.rules = DEFAULT 11:50:08 kafka | ssl.protocol = TLSv1.3 11:50:08 kafka | ssl.provider = null 11:50:08 kafka | ssl.secure.random.implementation = null 11:50:08 kafka | ssl.trustmanager.algorithm = PKIX 11:50:08 kafka | ssl.truststore.certificates = null 11:50:08 kafka | ssl.truststore.location = null 11:50:08 kafka | ssl.truststore.password = null 11:50:08 kafka | ssl.truststore.type = JKS 11:50:08 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 11:50:08 kafka | transaction.max.timeout.ms = 900000 11:50:08 kafka | transaction.partition.verification.enable = true 11:50:08 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 11:50:08 kafka | transaction.state.log.load.buffer.size = 5242880 11:50:08 kafka | transaction.state.log.min.isr = 2 11:50:08 kafka | transaction.state.log.num.partitions = 50 11:50:08 kafka | transaction.state.log.replication.factor = 3 11:50:08 kafka | transaction.state.log.segment.bytes = 104857600 11:50:08 kafka | transactional.id.expiration.ms = 604800000 11:50:08 kafka | unclean.leader.election.enable = false 11:50:08 kafka | unstable.api.versions.enable = false 11:50:08 kafka | zookeeper.clientCnxnSocket = null 11:50:08 kafka | zookeeper.connect = zookeeper:2181 11:50:08 kafka | zookeeper.connection.timeout.ms = null 11:50:08 kafka | zookeeper.max.in.flight.requests = 10 11:50:08 kafka | zookeeper.metadata.migration.enable = false 11:50:08 kafka | zookeeper.session.timeout.ms = 18000 11:50:08 kafka | zookeeper.set.acl = false 11:50:08 kafka | zookeeper.ssl.cipher.suites = null 11:50:08 kafka | zookeeper.ssl.client.enable = false 11:50:08 kafka | zookeeper.ssl.crl.enable = false 11:50:08 kafka | zookeeper.ssl.enabled.protocols = null 11:50:08 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 11:50:08 kafka | zookeeper.ssl.keystore.location = null 11:50:08 kafka | zookeeper.ssl.keystore.password = null 11:50:08 kafka | zookeeper.ssl.keystore.type = null 11:50:08 kafka | zookeeper.ssl.ocsp.enable = false 11:50:08 kafka | zookeeper.ssl.protocol = TLSv1.2 11:50:08 kafka | zookeeper.ssl.truststore.location = null 11:50:08 kafka | zookeeper.ssl.truststore.password = null 11:50:08 kafka | zookeeper.ssl.truststore.type = null 11:50:08 kafka | (kafka.server.KafkaConfig) 11:50:08 kafka | [2024-02-21 11:47:40,306] INFO [ThrottledChannelReaper-Fetch]: Starting 
(kafka.server.ClientQuotaManager$ThrottledChannelReaper) 11:50:08 kafka | [2024-02-21 11:47:40,307] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 11:50:08 kafka | [2024-02-21 11:47:40,310] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 11:50:08 kafka | [2024-02-21 11:47:40,307] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 11:50:08 kafka | [2024-02-21 11:47:40,346] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:47:40,352] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:47:40,361] INFO Loaded 0 logs in 14ms (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:47:40,363] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:47:40,364] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:47:40,425] INFO Starting the log cleaner (kafka.log.LogCleaner) 11:50:08 kafka | [2024-02-21 11:47:40,474] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) 11:50:08 kafka | [2024-02-21 11:47:40,526] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 11:50:08 kafka | [2024-02-21 11:47:40,541] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 11:50:08 kafka | [2024-02-21 11:47:40,572] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 11:50:08 kafka | [2024-02-21 11:47:40,926] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 11:50:08 kafka | [2024-02-21 11:47:40,952] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) 11:50:08 kafka | [2024-02-21 11:47:40,952] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 11:50:08 kafka | [2024-02-21 11:47:40,957] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) 11:50:08 kafka | [2024-02-21 11:47:40,964] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 11:50:08 kafka | [2024-02-21 11:47:40,990] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:50:08 kafka | [2024-02-21 11:47:40,992] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:50:08 kafka | [2024-02-21 11:47:40,993] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:50:08 kafka | [2024-02-21 11:47:40,994] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:50:08 kafka | [2024-02-21 11:47:40,997] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:50:08 kafka | [2024-02-21 11:47:41,009] INFO [LogDirFailureHandler]: 
Starting (kafka.server.ReplicaManager$LogDirFailureHandler) 11:50:08 kafka | [2024-02-21 11:47:41,009] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) 11:50:08 kafka | [2024-02-21 11:47:41,032] INFO Creating /brokers/ids/1 (is it secure? false) (kafka.zk.KafkaZkClient) 11:50:08 kafka | [2024-02-21 11:47:41,057] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1708516061046,1708516061046,1,0,0,72057611275206657,258,0,27 11:50:08 kafka | (kafka.zk.KafkaZkClient) 11:50:08 kafka | [2024-02-21 11:47:41,057] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 11:50:08 kafka | [2024-02-21 11:47:41,123] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 11:50:08 kafka | [2024-02-21 11:47:41,133] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:50:08 kafka | [2024-02-21 11:47:41,142] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:50:08 kafka | [2024-02-21 11:47:41,143] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:50:08 kafka | [2024-02-21 11:47:41,162] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:47:41,162] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) 11:50:08 kafka | [2024-02-21 11:47:41,168] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:47:41,175] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) 11:50:08 kafka | [2024-02-21 11:47:41,180] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) 11:50:08 kafka | [2024-02-21 11:47:41,186] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) 11:50:08 kafka | [2024-02-21 11:47:41,195] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) 11:50:08 kafka | [2024-02-21 11:47:41,198] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) 11:50:08 kafka | [2024-02-21 11:47:41,198] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) 11:50:08 kafka | [2024-02-21 11:47:41,215] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). 
(kafka.server.metadata.ZkMetadataCache) 11:50:08 kafka | [2024-02-21 11:47:41,216] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) 11:50:08 kafka | [2024-02-21 11:47:41,220] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) 11:50:08 kafka | [2024-02-21 11:47:41,224] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) 11:50:08 kafka | [2024-02-21 11:47:41,227] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) 11:50:08 kafka | [2024-02-21 11:47:41,241] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 11:50:08 kafka | [2024-02-21 11:47:41,246] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) 11:50:08 kafka | [2024-02-21 11:47:41,251] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) 11:50:08 kafka | [2024-02-21 11:47:41,257] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) 11:50:08 kafka | [2024-02-21 11:47:41,272] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) 11:50:08 kafka | [2024-02-21 11:47:41,273] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) 11:50:08 kafka | [2024-02-21 11:47:41,274] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) 11:50:08 kafka | [2024-02-21 11:47:41,274] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) 11:50:08 kafka | [2024-02-21 11:47:41,275] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) 11:50:08 kafka | [2024-02-21 11:47:41,276] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) 11:50:08 kafka | [2024-02-21 11:47:41,278] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) 11:50:08 kafka | [2024-02-21 11:47:41,279] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) 11:50:08 kafka | [2024-02-21 11:47:41,280] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) 11:50:08 kafka | [2024-02-21 11:47:41,281] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) 11:50:08 kafka | [2024-02-21 11:47:41,282] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) 11:50:08 kafka | [2024-02-21 11:47:41,286] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) 11:50:08 kafka | [2024-02-21 11:47:41,287] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) 11:50:08 kafka | [2024-02-21 11:47:41,291] INFO Awaiting socket connections on 0.0.0.0:9092. 
(kafka.network.DataPlaneAcceptor) 11:50:08 kafka | [2024-02-21 11:47:41,295] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) 11:50:08 kafka | [2024-02-21 11:47:41,296] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) 11:50:08 kafka | [2024-02-21 11:47:41,302] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) 11:50:08 kafka | [2024-02-21 11:47:41,304] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) 11:50:08 kafka | [2024-02-21 11:47:41,304] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) 11:50:08 kafka | [2024-02-21 11:47:41,309] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) 11:50:08 kafka | [2024-02-21 11:47:41,310] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) 11:50:08 kafka | [2024-02-21 11:47:41,315] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) 11:50:08 kafka | [2024-02-21 11:47:41,315] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) 11:50:08 kafka | [2024-02-21 11:47:41,316] INFO Kafka version: 7.6.0-ccs (org.apache.kafka.common.utils.AppInfoParser) 11:50:08 kafka | [2024-02-21 11:47:41,316] INFO Kafka commitId: 1991cb733c81d6791626f88253a042b2ec835ab8 (org.apache.kafka.common.utils.AppInfoParser) 11:50:08 kafka | [2024-02-21 11:47:41,316] INFO Kafka startTimeMs: 1708516061310 (org.apache.kafka.common.utils.AppInfoParser) 11:50:08 kafka | [2024-02-21 11:47:41,318] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) 11:50:08 kafka | [2024-02-21 11:47:41,326] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 11:50:08 kafka | [2024-02-21 11:47:41,328] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) 11:50:08 kafka | [2024-02-21 11:47:41,329] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) 11:50:08 kafka | [2024-02-21 11:47:41,329] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) 11:50:08 kafka | [2024-02-21 11:47:41,331] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) 11:50:08 kafka | [2024-02-21 11:47:41,333] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) 11:50:08 kafka | [2024-02-21 11:47:41,361] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) 11:50:08 kafka | [2024-02-21 11:47:41,420] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:47:41,473] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new 
controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 11:50:08 kafka | [2024-02-21 11:47:41,491] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 11:50:08 kafka | [2024-02-21 11:47:46,362] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 11:50:08 kafka | [2024-02-21 11:47:46,362] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 11:50:08 kafka | [2024-02-21 11:48:11,629] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 11:50:08 kafka | [2024-02-21 11:48:11,629] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 11:50:08 kafka | [2024-02-21 11:48:11,647] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) 11:50:08 kafka | [2024-02-21 11:48:11,656] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) 11:50:08 kafka | [2024-02-21 11:48:11,678] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(k7zjIZnaTc-jGITsmuEMwA),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(Kd_bSyAnRqaVsDMHCxs5MA),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, 
removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 11:50:08 kafka | [2024-02-21 11:48:11,680] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) 11:50:08 kafka | [2024-02-21 11:48:11,683] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,683] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,683] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,684] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,684] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,684] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,684] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,684] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,684] INFO [Controller id=1 epoch=1] 
Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,684] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,684] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,684] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,685] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,685] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,685] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,685] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,685] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,685] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,686] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,686] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,686] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,686] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,686] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,686] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,686] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,686] INFO [Controller id=1 epoch=1] 
Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,687] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,687] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,687] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,687] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,687] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,687] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,687] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,687] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,687] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,688] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,688] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,688] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,688] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,688] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.516201397Z level=info msg="Migration successfully executed" id="create api_key table" duration=2.577386ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.520782333Z level=info msg="Executing migration" id="add index api_key.account_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.521647093Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=864.51µs 11:50:08 grafana | 
logger=migrator t=2024-02-21T11:47:42.528617043Z level=info msg="Executing migration" id="add index api_key.key" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.530040968Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=1.432255ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.538308551Z level=info msg="Executing migration" id="add index api_key.account_id_name" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.539837126Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=1.528276ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.54410469Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.545332222Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.228172ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.550703437Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.551523265Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=819.988µs 11:50:08 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"5d9e63f8-1b57-4cf9-bb44-4b10e7689864","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"018e3451-cab0-4ff2-972b-669a0efc11c4","timestampMs":1708516112988,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:50:08 policy-apex-pdp | [2024-02-21T11:48:33.013+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 11:50:08 policy-apex-pdp | [2024-02-21T11:48:33.049+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:50:08 policy-apex-pdp | {"source":"pap-901e61cf-d04a-4979-8ccb-af4a8d6816b5","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"3eca3e7e-cdd4-486e-a88b-6144b7e84349","timestampMs":1708516112924,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:50:08 policy-apex-pdp | [2024-02-21T11:48:33.051+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 11:50:08 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"3eca3e7e-cdd4-486e-a88b-6144b7e84349","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"8a0d37d6-a004-4f90-8449-2356eb23a00d","timestampMs":1708516113051,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:50:08 policy-apex-pdp | [2024-02-21T11:48:33.057+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:50:08 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"3eca3e7e-cdd4-486e-a88b-6144b7e84349","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"8a0d37d6-a004-4f90-8449-2356eb23a00d","timestampMs":1708516113051,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:50:08 policy-apex-pdp | [2024-02-21T11:48:33.057+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 11:50:08 policy-apex-pdp | [2024-02-21T11:48:33.089+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:50:08 policy-apex-pdp | {"source":"pap-901e61cf-d04a-4979-8ccb-af4a8d6816b5","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"55fb9ca5-eea7-4501-b4ab-db748e5d2263","timestampMs":1708516113070,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:50:08 policy-apex-pdp | [2024-02-21T11:48:33.090+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 11:50:08 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"55fb9ca5-eea7-4501-b4ab-db748e5d2263","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"ae2231d7-bdaa-47c2-b1f4-6d8d36d4863d","timestampMs":1708516113090,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:50:08 policy-apex-pdp | [2024-02-21T11:48:33.099+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:50:08 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"55fb9ca5-eea7-4501-b4ab-db748e5d2263","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"ae2231d7-bdaa-47c2-b1f4-6d8d36d4863d","timestampMs":1708516113090,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:50:08 policy-apex-pdp | [2024-02-21T11:48:33.099+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 11:50:08 policy-apex-pdp | [2024-02-21T11:48:56.153+00:00|INFO|RequestLog|qtp1068445309-33] 172.17.0.4 - policyadmin [21/Feb/2024:11:48:56 +0000] "GET /metrics HTTP/1.1" 200 10647 "-" "Prometheus/2.49.1" 11:50:08 policy-apex-pdp | [2024-02-21T11:49:56.079+00:00|INFO|RequestLog|qtp1068445309-28] 172.17.0.4 - policyadmin [21/Feb/2024:11:49:56 +0000] "GET /metrics HTTP/1.1" 200 10650 "-" "Prometheus/2.49.1" 11:50:08 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved 11:50:08 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 11:50:08 mariadb | # 11:50:08 mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); 11:50:08 mariadb | # you may not use this file except in compliance with the License. 11:50:08 mariadb | # You may obtain a copy of the License at 11:50:08 mariadb | # 11:50:08 mariadb | # http://www.apache.org/licenses/LICENSE-2.0 11:50:08 mariadb | # 11:50:08 mariadb | # Unless required by applicable law or agreed to in writing, software 11:50:08 mariadb | # distributed under the License is distributed on an "AS IS" BASIS, 11:50:08 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
11:50:08 mariadb | # See the License for the specific language governing permissions and 11:50:08 mariadb | # limitations under the License. 11:50:08 mariadb | 11:50:08 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp 11:50:08 mariadb | do 11:50:08 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" 11:50:08 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" 11:50:08 mariadb | done 11:50:08 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 11:50:08 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' 11:50:08 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' 11:50:08 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 11:50:08 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' 11:50:08 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' 11:50:08 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 11:50:08 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' 11:50:08 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' 11:50:08 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 11:50:08 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' 11:50:08 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' 11:50:08 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 11:50:08 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' 11:50:08 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' 11:50:08 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 11:50:08 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' 11:50:08 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' 11:50:08 mariadb | 11:50:08 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" 11:50:08 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' 11:50:08 kafka | [2024-02-21 11:48:11,688] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,688] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,688] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,689] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 
11:48:11,689] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,689] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,689] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,689] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,689] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,689] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,689] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,689] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,694] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,694] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,694] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,694] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,694] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,694] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,694] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,694] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,694] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,694] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica 
(state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,694] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,694] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,694] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,694] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,694] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,694] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,694] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,694] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,694] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,694] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,694] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,694] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,694] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,694] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,694] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,695] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,695] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,695] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to 
NewReplica (state.change.logger) 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.555197823Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.55597116Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=773.267µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.56089176Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.569319736Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=8.427546ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.575360767Z level=info msg="Executing migration" id="create api_key table v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.575861142Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=500.025µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.579596659Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.581113345Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.513296ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.586464279Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.587764442Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.299873ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.593333658Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.594187017Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=853.099µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.598192798Z level=info msg="Executing migration" id="copy api_key v1 to v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.598795684Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=602.146µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.60334482Z level=info msg="Executing migration" id="Drop old table api_key_v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.604232419Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=886.829µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.610115758Z level=info msg="Executing migration" id="Update api_key table charset" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.610145009Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=29.981µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.61516928Z level=info msg="Executing migration" id="Add expires to api_key table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.6191482Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=3.97792ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.624181731Z level=info msg="Executing migration" id="Add service account foreign key" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.627113361Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.93397ms 11:50:08 grafana | logger=migrator 
t=2024-02-21T11:47:42.631431495Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.631707727Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=274.132µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.637042042Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.639538957Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.496655ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.644393245Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.647012022Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.618087ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.653751041Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.654496178Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=744.577µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.659121415Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.660028264Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=900.389µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.697733156Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.699290342Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.549456ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.704388014Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.705238992Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=850.888µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.708683407Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.709513136Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=829.769µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.715999271Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.717120202Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=1.120141ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.722029262Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.722272805Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=243.193µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.731548178Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.731667141Z level=info msg="Migration 
successfully executed" id="Update dashboard_snapshot table charset" duration=137.073µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.735542099Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.740708392Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=5.170832ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.744970174Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.747777623Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.806849ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.751203717Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.751299869Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=96.102µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.755093597Z level=info msg="Executing migration" id="create quota table v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.755803564Z level=info msg="Migration successfully executed" id="create quota table v1" duration=705.897µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.760653033Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.762257029Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.603026ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.767078558Z level=info msg="Executing migration" id="Update quota table charset" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.767164789Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=82.881µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.776031048Z level=info msg="Executing migration" id="create plugin_setting table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.776760066Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=728.548µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.780277872Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.781745856Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.467304ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.785239942Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.789192301Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=3.952679ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.795086341Z level=info msg="Executing migration" id="Update plugin_setting table charset" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.795120161Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=34.88µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.798353844Z level=info msg="Executing migration" id="create session table" 11:50:08 
grafana | logger=migrator t=2024-02-21T11:47:42.799245014Z level=info msg="Migration successfully executed" id="create session table" duration=895.74µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.802818Z level=info msg="Executing migration" id="Drop old table playlist table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.802971221Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=153.331µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.80880672Z level=info msg="Executing migration" id="Drop old table playlist_item table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.809036252Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=234.202µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.815438557Z level=info msg="Executing migration" id="create playlist table v2" 11:50:08 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql 11:50:08 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp 11:50:08 mariadb | 11:50:08 mariadb | 2024-02-21 11:47:38+00:00 [Note] [Entrypoint]: Stopping temporary server 11:50:08 mariadb | 2024-02-21 11:47:38 0 [Note] mariadbd (initiated by: unknown): Normal shutdown 11:50:08 mariadb | 2024-02-21 11:47:38 0 [Note] InnoDB: FTS optimize thread exiting. 11:50:08 mariadb | 2024-02-21 11:47:38 0 [Note] InnoDB: Starting shutdown... 11:50:08 mariadb | 2024-02-21 11:47:38 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool 11:50:08 mariadb | 2024-02-21 11:47:38 0 [Note] InnoDB: Buffer pool(s) dump completed at 240221 11:47:38 11:50:08 mariadb | 2024-02-21 11:47:38 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" 11:50:08 mariadb | 2024-02-21 11:47:38 0 [Note] InnoDB: Shutdown completed; log sequence number 328945; transaction id 298 11:50:08 mariadb | 2024-02-21 11:47:38 0 [Note] mariadbd: Shutdown complete 11:50:08 mariadb | 11:50:08 mariadb | 2024-02-21 11:47:38+00:00 [Note] [Entrypoint]: Temporary server stopped 11:50:08 mariadb | 11:50:08 mariadb | 2024-02-21 11:47:38+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. 11:50:08 mariadb | 11:50:08 mariadb | 2024-02-21 11:47:38 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... 11:50:08 mariadb | 2024-02-21 11:47:38 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 11:50:08 mariadb | 2024-02-21 11:47:38 0 [Note] InnoDB: Number of transaction pools: 1 11:50:08 mariadb | 2024-02-21 11:47:38 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 11:50:08 mariadb | 2024-02-21 11:47:38 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 11:50:08 mariadb | 2024-02-21 11:47:38 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 11:50:08 mariadb | 2024-02-21 11:47:38 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 11:50:08 mariadb | 2024-02-21 11:47:38 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 11:50:08 mariadb | 2024-02-21 11:47:38 0 [Note] InnoDB: Completed initialization of buffer pool 11:50:08 mariadb | 2024-02-21 11:47:38 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 11:50:08 mariadb | 2024-02-21 11:47:38 0 [Note] InnoDB: 128 rollback segments are active. 
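The mariadb entrypoint trace earlier in this log loops over the six policy schemas (migration, pooling, policyadmin, operationshistory, clampacm, policyclamp), running CREATE DATABASE IF NOT EXISTS and GRANT for each, loads the policyclamp tables, and then stops the temporary server so mariadbd can restart as process 1. The same idempotent bootstrap as a minimal sketch, assuming the mysql-connector-python package and placeholder host/credentials (none of which come from this job):

# Hypothetical sketch of the per-database bootstrap traced in the mariadb log.
import mysql.connector  # assumption: mysql-connector-python is installed

DATABASES = ["migration", "pooling", "policyadmin",
             "operationshistory", "clampacm", "policyclamp"]

conn = mysql.connector.connect(host="localhost", user="root",
                               password="secret")  # placeholder credentials
cur = conn.cursor()
for db in DATABASES:
    # db names come from the fixed list above, so plain formatting is safe here
    cur.execute(f"CREATE DATABASE IF NOT EXISTS `{db}`")
    cur.execute(f"GRANT ALL PRIVILEGES ON `{db}`.* TO 'policy_user'@'%'")
cur.execute("FLUSH PRIVILEGES")
cur.close()
conn.close()

CREATE DATABASE IF NOT EXISTS keeps the loop safe to re-run, which is why the init script can execute on every container start without failing on existing schemas.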
11:50:08 mariadb | 2024-02-21 11:47:38 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 11:50:08 mariadb | 2024-02-21 11:47:38 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 11:50:08 mariadb | 2024-02-21 11:47:38 0 [Note] InnoDB: log sequence number 328945; transaction id 299 11:50:08 mariadb | 2024-02-21 11:47:38 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool 11:50:08 mariadb | 2024-02-21 11:47:38 0 [Note] Plugin 'FEEDBACK' is disabled. 11:50:08 mariadb | 2024-02-21 11:47:38 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 11:50:08 mariadb | 2024-02-21 11:47:38 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. 11:50:08 mariadb | 2024-02-21 11:47:38 0 [Note] Server socket created on IP: '0.0.0.0'. 11:50:08 mariadb | 2024-02-21 11:47:38 0 [Note] Server socket created on IP: '::'. 11:50:08 mariadb | 2024-02-21 11:47:38 0 [Note] mariadbd: ready for connections. 11:50:08 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution 11:50:08 mariadb | 2024-02-21 11:47:38 0 [Note] InnoDB: Buffer pool(s) load completed at 240221 11:47:38 11:50:08 mariadb | 2024-02-21 11:47:39 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication) 11:50:08 prometheus | ts=2024-02-21T11:47:40.595Z caller=main.go:544 level=info msg="No time or size retention was set so using the default time retention" duration=15d 11:50:08 prometheus | ts=2024-02-21T11:47:40.595Z caller=main.go:588 level=info msg="Starting Prometheus Server" mode=server version="(version=2.49.1, branch=HEAD, revision=43e14844a33b65e2a396e3944272af8b3a494071)" 11:50:08 prometheus | ts=2024-02-21T11:47:40.595Z caller=main.go:593 level=info build_context="(go=go1.21.6, platform=linux/amd64, user=root@6d5f4c649d25, date=20240115-16:58:43, tags=netgo,builtinassets,stringlabels)" 11:50:08 prometheus | ts=2024-02-21T11:47:40.595Z caller=main.go:594 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" 11:50:08 prometheus | ts=2024-02-21T11:47:40.595Z caller=main.go:595 level=info fd_limits="(soft=1048576, hard=1048576)" 11:50:08 prometheus | ts=2024-02-21T11:47:40.595Z caller=main.go:596 level=info vm_limits="(soft=unlimited, hard=unlimited)" 11:50:08 prometheus | ts=2024-02-21T11:47:40.597Z caller=web.go:565 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 11:50:08 prometheus | ts=2024-02-21T11:47:40.598Z caller=main.go:1039 level=info msg="Starting TSDB ..." 11:50:08 prometheus | ts=2024-02-21T11:47:40.603Z caller=tls_config.go:274 level=info component=web msg="Listening on" address=[::]:9090 11:50:08 prometheus | ts=2024-02-21T11:47:40.604Z caller=tls_config.go:277 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9090 11:50:08 prometheus | ts=2024-02-21T11:47:40.607Z caller=head.go:606 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 11:50:08 prometheus | ts=2024-02-21T11:47:40.607Z caller=head.go:687 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=4.91µs 11:50:08 prometheus | ts=2024-02-21T11:47:40.607Z caller=head.go:695 level=info component=tsdb msg="Replaying WAL, this may take a while" 11:50:08 prometheus | ts=2024-02-21T11:47:40.607Z caller=head.go:766 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 11:50:08 prometheus | ts=2024-02-21T11:47:40.607Z caller=head.go:803 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=23.471µs wal_replay_duration=455.926µs wbl_replay_duration=260ns total_replay_duration=508.707µs 11:50:08 prometheus | ts=2024-02-21T11:47:40.609Z caller=main.go:1060 level=info fs_type=EXT4_SUPER_MAGIC 11:50:08 prometheus | ts=2024-02-21T11:47:40.609Z caller=main.go:1063 level=info msg="TSDB started" 11:50:08 prometheus | ts=2024-02-21T11:47:40.609Z caller=main.go:1245 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 11:50:08 prometheus | ts=2024-02-21T11:47:40.610Z caller=main.go:1282 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=987.614µs db_storage=1.29µs remote_storage=2.56µs web_handler=1.03µs query_engine=1.16µs scrape=185.533µs scrape_sd=101.311µs notify=36.481µs notify_sd=13.5µs rules=1.79µs tracing=4.72µs 11:50:08 prometheus | ts=2024-02-21T11:47:40.610Z caller=main.go:1024 level=info msg="Server is ready to receive web requests." 11:50:08 prometheus | ts=2024-02-21T11:47:40.610Z caller=manager.go:146 level=info component="rule manager" msg="Starting rule manager..." 
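Prometheus reports that it is listening on 0.0.0.0:9090 and is "ready to receive web requests", and the apex-pdp access log above shows it scraping /metrics roughly once a minute. A quick readiness probe against its query API, sketched with the standard library and assuming the server is port-mapped to localhost:9090 (an assumption; inside the compose network the service name would be used instead):

# Hypothetical readiness check against the Prometheus instance started above.
import json
import urllib.request

url = "http://localhost:9090/api/v1/query?query=up"  # placeholder address

with urllib.request.urlopen(url, timeout=5) as resp:
    body = json.load(resp)

# each result is one scrape target; value[1] == "1" means its last scrape succeeded
for result in body["data"]["result"]:
    print(result["metric"].get("instance"), "up =", result["value"][1])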
11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.816591209Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.152442ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.822451058Z level=info msg="Executing migration" id="create playlist item table v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.82364678Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.195512ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.82753611Z level=info msg="Executing migration" id="Update playlist table charset" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.827623531Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=88.611µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.831495749Z level=info msg="Executing migration" id="Update playlist_item table charset" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.83157994Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=86.201µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.835205136Z level=info msg="Executing migration" id="Add playlist column created_at" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.838660752Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.454976ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.842025816Z level=info msg="Executing migration" id="Add playlist column updated_at" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.845236358Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.206392ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.84834084Z level=info msg="Executing migration" id="drop preferences table v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.848448291Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=107.851µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.852958957Z level=info msg="Executing migration" id="drop preferences table v3" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.853071438Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=108.641µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.856512293Z level=info msg="Executing migration" id="create preferences table v3" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.85728517Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=772.527µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.862666564Z level=info msg="Executing migration" id="Update preferences table charset" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.862694645Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=28.871µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.866382313Z level=info msg="Executing migration" id="Add column team_id in preferences" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.871296542Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=4.912169ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.875202422Z level=info msg="Executing migration" id="Update team_id column values in preferences" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.875471464Z level=info msg="Migration successfully executed" 
id="Update team_id column values in preferences" duration=269.492µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.879396694Z level=info msg="Executing migration" id="Add column week_start in preferences" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.882539535Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.142421ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.88587111Z level=info msg="Executing migration" id="Add column preferences.json_data" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.889066422Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.194842ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.894306785Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.894397606Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=91.071µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.898084833Z level=info msg="Executing migration" id="Add preferences index org_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.899068523Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=983.38µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.907412447Z level=info msg="Executing migration" id="Add preferences index user_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.908908362Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.495465ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.912939033Z level=info msg="Executing migration" id="create alert table v1" 11:50:08 policy-db-migrator | Waiting for mariadb port 3306... 11:50:08 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 11:50:08 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 11:50:08 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 11:50:08 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 11:50:08 policy-db-migrator | Connection to mariadb (172.17.0.3) 3306 port [tcp/mysql] succeeded! 
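policy-db-migrator simply retries nc against mariadb's 3306 port until the connection succeeds before starting its upgrade run. The same wait-for-port pattern as a small standard-library sketch (host, port, retry count and delay are placeholders, not values from this job):

# Hypothetical wait-for-port loop, mirroring the nc retries in the migrator log.
import socket
import time

def wait_for_port(host: str, port: int, attempts: int = 30, delay: float = 2.0) -> None:
    for _ in range(attempts):
        try:
            with socket.create_connection((host, port), timeout=2):
                print(f"Connection to {host} {port} port succeeded!")
                return
        except OSError:
            print(f"connect to {host} port {port} failed: Connection refused")
            time.sleep(delay)
    raise RuntimeError(f"{host}:{port} not reachable after {attempts} attempts")

wait_for_port("mariadb", 3306)  # placeholder: compose service name and port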
11:50:08 policy-db-migrator | 321 blocks 11:50:08 policy-db-migrator | Preparing upgrade release version: 0800 11:50:08 policy-db-migrator | Preparing upgrade release version: 0900 11:50:08 policy-db-migrator | Preparing upgrade release version: 1000 11:50:08 policy-db-migrator | Preparing upgrade release version: 1100 11:50:08 policy-db-migrator | Preparing upgrade release version: 1200 11:50:08 policy-db-migrator | Preparing upgrade release version: 1300 11:50:08 policy-db-migrator | Done 11:50:08 policy-db-migrator | name version 11:50:08 policy-db-migrator | policyadmin 0 11:50:08 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 11:50:08 policy-db-migrator | upgrade: 0 -> 1300 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 
policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 mariadb | 2024-02-21 11:47:39 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.6' (This connection closed normally without authentication) 11:50:08 mariadb | 2024-02-21 11:47:39 23 [Warning] Aborted connection 23 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) 11:50:08 mariadb | 2024-02-21 11:47:39 24 [Warning] Aborted connection 24 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication) 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.913866793Z level=info msg="Migration successfully executed" id="create alert table v1" duration=927.4µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.917108985Z level=info msg="Executing migration" id="add index alert org_id & id " 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.918077195Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=970.57µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.921747922Z level=info msg="Executing migration" id="add index alert state" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.922861144Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.108322ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.92649462Z level=info msg="Executing migration" id="add index alert dashboard_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.927668542Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.174002ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.931765863Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.93241783Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=653.077µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.936190269Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.937149538Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=957.839µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.942140929Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.942984917Z level=info msg="Migration successfully executed" id="drop index 
UQE_alert_rule_tag_alert_id_tag_id - v1" duration=844.038µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.948082229Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.962712056Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=14.628577ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.966402553Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.966891248Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=487.725µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.969996351Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.970662677Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=666.296µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.974382984Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.974860139Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=475.885µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.978632748Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.979535396Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=896.368µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.985319985Z level=info msg="Executing migration" id="create alert_notification table v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.98678631Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.465885ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.990563368Z level=info msg="Executing migration" id="Add column is_default" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:42.994347096Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.783408ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.001237206Z level=info msg="Executing migration" id="Add column frequency" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.004800092Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.561946ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.007978025Z level=info msg="Executing migration" id="Add column send_reminder" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.011533842Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.548607ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.016259241Z level=info msg="Executing migration" id="Add column disable_resolve_message" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.019853197Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.593786ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.023626287Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 11:50:08 grafana | logger=migrator 
t=2024-02-21T11:47:43.024591026Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=962.699µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.027941981Z level=info msg="Executing migration" id="Update alert table charset" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.027970281Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=29.07µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.034245456Z level=info msg="Executing migration" id="Update alert_notification table charset" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.034325197Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=81.441µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.038057815Z level=info msg="Executing migration" id="create notification_journal table v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.039304268Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.246573ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.04622539Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.047780715Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.550895ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.051590065Z level=info msg="Executing migration" id="drop alert_notification_journal" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.053105621Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.514656ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.056768639Z level=info msg="Executing migration" id="create alert_notification_state table v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.057494986Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=721.457µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.062032922Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.063058414Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.028292ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.06753553Z level=info msg="Executing migration" id="Add for to alert table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.071262969Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.726959ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.074422341Z level=info msg="Executing migration" id="Add column uid in alert_notification" 11:50:08 kafka | [2024-02-21 11:48:11,695] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,695] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,695] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica 
(state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,695] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,695] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,695] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,695] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,695] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,695] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,695] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,695] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,695] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,695] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,695] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,695] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,695] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,695] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,695] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,695] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,695] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,695] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to 
NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,695] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,695] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,695] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.07822503Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.808209ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.082749107Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.08298738Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=238.093µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.088505087Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.08987559Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.369133ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.107594974Z level=info msg="Executing migration" id="Remove unique index org_id_name" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.109000958Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.406454ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.113367064Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.118040351Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=4.676767ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.122039393Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.122126294Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=87.371µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.128657721Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.130512311Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.85378ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.134150468Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.135345981Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.190893ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.140662806Z level=info msg="Executing migration" id="Drop old annotation table v4" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.140773457Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=111.141µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.147192453Z level=info msg="Executing migration" id="create annotation table v5" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.148427475Z level=info msg="Migration successfully executed" id="create annotation table 
v5" duration=1.234982ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.152312836Z level=info msg="Executing migration" id="add index annotation 0 v3" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.153809301Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.492835ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.157755012Z level=info msg="Executing migration" id="add index annotation 1 v3" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.159251377Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.496305ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.165696524Z level=info msg="Executing migration" id="add index annotation 2 v3" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.166602404Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=905.47µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.170687855Z level=info msg="Executing migration" id="add index annotation 3 v3" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.172285582Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.597197ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.176443805Z level=info msg="Executing migration" id="add index annotation 4 v3" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.178140003Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.696168ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.184560339Z level=info msg="Executing migration" id="Update annotation table charset" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.184588549Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=29.14µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.189016975Z level=info msg="Executing migration" id="Add column region_id to annotation table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.193172557Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=4.155142ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.199276521Z level=info msg="Executing migration" id="Drop category_id index" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.200354692Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.069422ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.205371474Z level=info msg="Executing migration" id="Add column tags to annotation table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.211521818Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=6.149754ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.216184985Z level=info msg="Executing migration" id="Create annotation_tag table v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.216882953Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=692.918µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.222812624Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.224478101Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.664157ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.229124839Z 
level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.230540224Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.415335ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.239634998Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.255623673Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=15.988915ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.262205641Z level=info msg="Executing migration" id="Create annotation_tag table v3" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.262673726Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=468.525µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.266300193Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.267777629Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.469545ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.272052942Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.272498728Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=445.816µs 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > 
upgrade 0210-jpatoscadatatype_constraints.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 11:50:08 policy-pap | Waiting for mariadb port 3306... 11:50:08 policy-pap | mariadb (172.17.0.3:3306) open 11:50:08 policy-pap | Waiting for kafka port 9092... 11:50:08 policy-pap | kafka (172.17.0.7:9092) open 11:50:08 policy-pap | Waiting for api port 6969... 
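
[editor's note] The "Waiting for mariadb port 3306..." / "mariadb (172.17.0.3:3306) open" / "Waiting for kafka port 9092..." lines above are the pap container probing its dependencies before starting. The actual entrypoint is almost certainly a shell script; the following is only a minimal Java sketch of that kind of TCP readiness probe, with the class name, helper name and timeout chosen for illustration and not taken from the job.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.net.Socket;

    // Hypothetical sketch of a "wait for dependency port" probe, echoing the
    // "Waiting for mariadb port 3306..." messages in the log above.
    public final class WaitForPort {
        // Poll host:port until a TCP connection succeeds or the timeout elapses.
        static boolean waitForPort(String host, int port, long timeoutMillis) throws InterruptedException {
            long deadline = System.currentTimeMillis() + timeoutMillis;
            while (System.currentTimeMillis() < deadline) {
                try (Socket socket = new Socket()) {
                    socket.connect(new InetSocketAddress(host, port), 2_000);
                    System.out.printf("%s:%d open%n", host, port);
                    return true;
                } catch (IOException notReadyYet) {
                    Thread.sleep(1_000); // dependency not listening yet; retry
                }
            }
            return false;
        }

        public static void main(String[] args) throws InterruptedException {
            // Same dependencies the pap container reports waiting for.
            waitForPort("mariadb", 3306, 120_000);
            waitForPort("kafka", 9092, 120_000);
            waitForPort("api", 6969, 120_000);
        }
    }
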
11:50:08 policy-pap | api (172.17.0.8:6969) open 11:50:08 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 11:50:08 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 11:50:08 policy-pap | 11:50:08 policy-pap | . ____ _ __ _ _ 11:50:08 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 11:50:08 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 11:50:08 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 11:50:08 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 11:50:08 policy-pap | =========|_|==============|___/=/_/_/_/ 11:50:08 policy-pap | :: Spring Boot :: (v3.1.8) 11:50:08 policy-pap | 11:50:08 policy-pap | [2024-02-21T11:48:01.204+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.10 with PID 33 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 11:50:08 policy-pap | [2024-02-21T11:48:01.206+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" 11:50:08 policy-pap | [2024-02-21T11:48:03.158+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 11:50:08 policy-pap | [2024-02-21T11:48:03.264+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 96 ms. Found 7 JPA repository interfaces. 11:50:08 policy-pap | [2024-02-21T11:48:03.679+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 11:50:08 policy-pap | [2024-02-21T11:48:03.680+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 11:50:08 policy-pap | [2024-02-21T11:48:04.402+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 11:50:08 policy-pap | [2024-02-21T11:48:04.412+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 11:50:08 policy-pap | [2024-02-21T11:48:04.417+00:00|INFO|StandardService|main] Starting service [Tomcat] 11:50:08 policy-pap | [2024-02-21T11:48:04.418+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] 11:50:08 policy-pap | [2024-02-21T11:48:04.519+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 11:50:08 policy-pap | [2024-02-21T11:48:04.519+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3231 ms 11:50:08 policy-pap | [2024-02-21T11:48:05.000+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 11:50:08 policy-pap | [2024-02-21T11:48:05.098+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 11:50:08 policy-pap | [2024-02-21T11:48:05.101+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer 11:50:08 policy-pap | [2024-02-21T11:48:05.156+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 11:50:08 policy-pap | [2024-02-21T11:48:05.542+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 11:50:08 policy-pap | [2024-02-21T11:48:05.565+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 
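
[editor's note] The "HikariPool-1 - Starting..." / "Added connection org.mariadb.jdbc.Connection@36a6bea6" / "Start completed." sequence that follows is HikariCP opening the PAP connection pool against MariaDB. A minimal sketch of how such a pool is typically wired with the MariaDB JDBC driver is shown below; the JDBC URL, database name and credentials are placeholders, not the values used by this job, which configures the pool through Spring properties rather than code.

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;
    import java.sql.Connection;
    import java.sql.SQLException;

    // Illustrative HikariCP setup against the MariaDB JDBC driver, mirroring the
    // "HikariPool-1 - Starting..." messages above. URL and credentials are placeholders.
    public final class PapDataSourceSketch {
        public static void main(String[] args) throws SQLException {
            HikariConfig config = new HikariConfig();
            config.setPoolName("HikariPool-1");
            config.setJdbcUrl("jdbc:mariadb://mariadb:3306/policyadmin"); // hypothetical URL
            config.setUsername("policy_user");                            // placeholder
            config.setPassword("policy_user");                            // placeholder
            config.setMaximumPoolSize(10);

            try (HikariDataSource dataSource = new HikariDataSource(config);
                 Connection connection = dataSource.getConnection()) {
                // At this point Hikari would log "HikariPool-1 - Start completed."
                System.out.println("connected: " + connection.getMetaData().getURL());
            }
        }
    }
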
11:50:08 policy-pap | [2024-02-21T11:48:05.682+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@36a6bea6 11:50:08 policy-pap | [2024-02-21T11:48:05.685+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 11:50:08 policy-pap | [2024-02-21T11:48:05.718+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default) 11:50:08 policy-pap | [2024-02-21T11:48:05.720+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead 11:50:08 policy-pap | [2024-02-21T11:48:07.780+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 11:50:08 policy-pap | [2024-02-21T11:48:07.784+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 11:50:08 policy-pap | [2024-02-21T11:48:08.292+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository 11:50:08 policy-pap | [2024-02-21T11:48:08.771+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository 11:50:08 policy-pap | [2024-02-21T11:48:08.867+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository 11:50:08 policy-pap | [2024-02-21T11:48:09.140+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 11:50:08 policy-pap | allow.auto.create.topics = true 11:50:08 policy-pap | auto.commit.interval.ms = 5000 11:50:08 policy-pap | auto.include.jmx.reporter = true 11:50:08 policy-pap | auto.offset.reset = latest 11:50:08 policy-pap | bootstrap.servers = [kafka:9092] 11:50:08 policy-pap | check.crcs = true 11:50:08 policy-pap | client.dns.lookup = use_all_dns_ips 11:50:08 policy-pap | client.id = consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-1 11:50:08 policy-pap | client.rack = 11:50:08 policy-pap | connections.max.idle.ms = 540000 11:50:08 policy-pap | default.api.timeout.ms = 60000 11:50:08 policy-pap | enable.auto.commit = true 11:50:08 policy-pap | exclude.internal.topics = true 11:50:08 policy-pap | fetch.max.bytes = 52428800 11:50:08 policy-pap | fetch.max.wait.ms = 500 11:50:08 policy-pap | fetch.min.bytes = 1 11:50:08 policy-pap | group.id = deda7b7f-e78f-4ba0-9889-983a637c2ccd 11:50:08 policy-pap | group.instance.id = null 11:50:08 policy-pap | heartbeat.interval.ms = 3000 11:50:08 policy-pap | interceptor.classes = [] 11:50:08 policy-pap | internal.leave.group.on.close = true 11:50:08 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 11:50:08 policy-pap | isolation.level = read_uncommitted 11:50:08 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:50:08 policy-pap | max.partition.fetch.bytes = 1048576 11:50:08 policy-pap | max.poll.interval.ms = 300000 11:50:08 policy-pap | max.poll.records = 500 11:50:08 policy-pap | metadata.max.age.ms = 300000 11:50:08 policy-pap | metric.reporters = [] 11:50:08 policy-pap | metrics.num.samples = 2 11:50:08 policy-pap | metrics.recording.level = INFO 11:50:08 policy-pap | metrics.sample.window.ms = 30000 11:50:08 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 11:50:08 policy-pap | receive.buffer.bytes = 65536 11:50:08 policy-pap | reconnect.backoff.max.ms = 1000 11:50:08 policy-pap | reconnect.backoff.ms = 50 11:50:08 policy-pap | request.timeout.ms = 30000 11:50:08 policy-pap | retry.backoff.ms = 100 11:50:08 policy-pap | sasl.client.callback.handler.class = null 11:50:08 policy-pap | sasl.jaas.config = null 11:50:08 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:50:08 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 11:50:08 policy-pap | sasl.kerberos.service.name = null 11:50:08 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 11:50:08 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 11:50:08 policy-pap | sasl.login.callback.handler.class = null 11:50:08 policy-pap | sasl.login.class = null 11:50:08 policy-pap | sasl.login.connect.timeout.ms = null 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.278034115Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.27856061Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=526.535µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.283545862Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.283846675Z level=info 
msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=293.243µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.288211989Z level=info msg="Executing migration" id="Add created time to annotation table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.293548365Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=5.337576ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.300054191Z level=info msg="Executing migration" id="Add updated time to annotation table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.304041263Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.986782ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.31246924Z level=info msg="Executing migration" id="Add index for created in annotation table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.313387829Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=918.339µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.320020219Z level=info msg="Executing migration" id="Add index for updated in annotation table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.321482523Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.466374ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.328143352Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.328507976Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=364.954µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.336191295Z level=info msg="Executing migration" id="Add epoch_end column" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.342918345Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=6.72613ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.349401852Z level=info msg="Executing migration" id="Add index for epoch_end" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.350334131Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=928.809µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.355574835Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.355902419Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=328.254µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.360817259Z level=info msg="Executing migration" id="Move region to single row" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.361516096Z level=info msg="Migration successfully executed" id="Move region to single row" duration=698.347µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.369462969Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.371235907Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.778288ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.377088678Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 11:50:08 grafana 
| logger=migrator t=2024-02-21T11:47:43.378823246Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.738868ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.384807278Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.385767317Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=959.389µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.39470614Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.396313146Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.606456ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.400206566Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.401684042Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.476936ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.406862366Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.407992797Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.130351ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.412657166Z level=info msg="Executing migration" id="Increase tags column to length 4096" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.412795357Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=133.641µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.417823398Z level=info msg="Executing migration" id="create test_data table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.419113992Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.290234ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.42371589Z level=info msg="Executing migration" id="create dashboard_version table v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.424462767Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=746.677µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.524220688Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.526054767Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.8374ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.532994699Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.534031159Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.03619ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.539824729Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.540643847Z 
level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=818.788µs 11:50:08 policy-pap | sasl.login.read.timeout.ms = null 11:50:08 policy-pap | sasl.login.refresh.buffer.seconds = 300 11:50:08 policy-pap | sasl.login.refresh.min.period.seconds = 60 11:50:08 policy-pap | sasl.login.refresh.window.factor = 0.8 11:50:08 policy-pap | sasl.login.refresh.window.jitter = 0.05 11:50:08 policy-pap | sasl.login.retry.backoff.max.ms = 10000 11:50:08 policy-pap | sasl.login.retry.backoff.ms = 100 11:50:08 policy-pap | sasl.mechanism = GSSAPI 11:50:08 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 11:50:08 policy-pap | sasl.oauthbearer.expected.audience = null 11:50:08 policy-pap | sasl.oauthbearer.expected.issuer = null 11:50:08 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:50:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:50:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:50:08 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 11:50:08 policy-pap | sasl.oauthbearer.scope.claim.name = scope 11:50:08 policy-pap | sasl.oauthbearer.sub.claim.name = sub 11:50:08 policy-pap | sasl.oauthbearer.token.endpoint.url = null 11:50:08 policy-pap | security.protocol = PLAINTEXT 11:50:08 policy-pap | security.providers = null 11:50:08 policy-pap | send.buffer.bytes = 131072 11:50:08 policy-pap | session.timeout.ms = 45000 11:50:08 policy-pap | socket.connection.setup.timeout.max.ms = 30000 11:50:08 policy-pap | socket.connection.setup.timeout.ms = 10000 11:50:08 policy-pap | ssl.cipher.suites = null 11:50:08 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:50:08 policy-pap | ssl.endpoint.identification.algorithm = https 11:50:08 policy-pap | ssl.engine.factory.class = null 11:50:08 policy-pap | ssl.key.password = null 11:50:08 policy-pap | ssl.keymanager.algorithm = SunX509 11:50:08 policy-pap | ssl.keystore.certificate.chain = null 11:50:08 policy-pap | ssl.keystore.key = null 11:50:08 policy-pap | ssl.keystore.location = null 11:50:08 policy-pap | ssl.keystore.password = null 11:50:08 policy-pap | ssl.keystore.type = JKS 11:50:08 policy-pap | ssl.protocol = TLSv1.3 11:50:08 policy-pap | ssl.provider = null 11:50:08 policy-pap | ssl.secure.random.implementation = null 11:50:08 policy-pap | ssl.trustmanager.algorithm = PKIX 11:50:08 policy-pap | ssl.truststore.certificates = null 11:50:08 policy-pap | ssl.truststore.location = null 11:50:08 policy-pap | ssl.truststore.password = null 11:50:08 policy-pap | ssl.truststore.type = JKS 11:50:08 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:50:08 policy-pap | 11:50:08 policy-pap | [2024-02-21T11:48:09.321+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 11:50:08 policy-pap | [2024-02-21T11:48:09.321+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 11:50:08 policy-pap | [2024-02-21T11:48:09.321+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708516089319 11:50:08 policy-pap | [2024-02-21T11:48:09.324+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-1, groupId=deda7b7f-e78f-4ba0-9889-983a637c2ccd] Subscribed to topic(s): policy-pdp-pap 11:50:08 policy-pap | [2024-02-21T11:48:09.324+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 11:50:08 policy-pap | allow.auto.create.topics = true 11:50:08 policy-pap | auto.commit.interval.ms = 5000 11:50:08 policy-pap | auto.include.jmx.reporter = true 11:50:08 policy-pap 
| auto.offset.reset = latest 11:50:08 policy-pap | bootstrap.servers = [kafka:9092] 11:50:08 policy-pap | check.crcs = true 11:50:08 policy-pap | client.dns.lookup = use_all_dns_ips 11:50:08 policy-pap | client.id = consumer-policy-pap-2 11:50:08 policy-pap | client.rack = 11:50:08 policy-pap | connections.max.idle.ms = 540000 11:50:08 policy-pap | default.api.timeout.ms = 60000 11:50:08 policy-pap | enable.auto.commit = true 11:50:08 policy-pap | exclude.internal.topics = true 11:50:08 policy-pap | fetch.max.bytes = 52428800 11:50:08 policy-pap | fetch.max.wait.ms = 500 11:50:08 policy-pap | fetch.min.bytes = 1 11:50:08 policy-pap | group.id = policy-pap 11:50:08 policy-pap | group.instance.id = null 11:50:08 policy-pap | heartbeat.interval.ms = 3000 11:50:08 policy-pap | interceptor.classes = [] 11:50:08 policy-pap | internal.leave.group.on.close = true 11:50:08 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 11:50:08 policy-pap | isolation.level = read_uncommitted 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.547828392Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.548283077Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=454.414µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.554593781Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 11:50:08 kafka | [2024-02-21 11:48:11,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 
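
[editor's note] The ConsumerConfig dumps above cover the two Kafka consumers PAP creates at startup: consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-1 (group.id deda7b7f-e78f-4ba0-9889-983a637c2ccd), which the log shows subscribing to policy-pdp-pap, and consumer-policy-pap-2 (group.id policy-pap). A minimal sketch of a consumer built with the same key settings from those dumps (bootstrap.servers kafka:9092, StringDeserializer for key and value, auto.offset.reset=latest) follows; it uses only the plain Kafka client API and is not the PAP source code.

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    // Consumer configured like the dumps above and subscribed to policy-pdp-pap.
    public final class PdpPapConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
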
11:50:08 kafka | [2024-02-21 11:48:11,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) 11:50:08 
policy-db-migrator | -------------- 11:50:08 kafka | [2024-02-21 11:48:11,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,830] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,831] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition 
with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 
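
[editor's note] The state.change.logger lines above show controller id=1 (epoch 1) moving each __consumer_offsets partition, and policy-pdp-pap-0, from NewPartition to OnlinePartition with leader=1 and isr=[1] on the single broker. The same partition/leader layout can be inspected after the fact with the Kafka AdminClient; the sketch below is an illustration of that, not something the CSIT job runs.

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.TopicDescription;
    import org.apache.kafka.common.TopicPartitionInfo;

    // Describe __consumer_offsets and print leader/ISR per partition, which for
    // this single-broker setup should report leader=1, isr=[1] everywhere.
    public final class DescribeOffsetsTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");

            try (Admin admin = Admin.create(props)) {
                TopicDescription description = admin
                        .describeTopics(List.of("__consumer_offsets"))
                        .allTopicNames().get()
                        .get("__consumer_offsets");
                for (TopicPartitionInfo partition : description.partitions()) {
                    System.out.printf("partition=%d leader=%s isr=%s%n",
                            partition.partition(), partition.leader(), partition.isr());
                }
            }
        }
    }
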
11:50:08 kafka | [2024-02-21 11:48:11,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:50:08 policy-pap | max.partition.fetch.bytes = 1048576 11:50:08 policy-pap | max.poll.interval.ms = 300000 11:50:08 policy-pap | max.poll.records = 500 11:50:08 policy-pap | metadata.max.age.ms = 300000 11:50:08 policy-pap | metric.reporters = [] 11:50:08 policy-pap | metrics.num.samples = 2 11:50:08 policy-pap | metrics.recording.level = INFO 11:50:08 policy-pap | metrics.sample.window.ms = 30000 11:50:08 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 11:50:08 policy-pap | receive.buffer.bytes = 65536 11:50:08 policy-pap | reconnect.backoff.max.ms = 1000 11:50:08 policy-pap | reconnect.backoff.ms = 50 11:50:08 policy-pap | request.timeout.ms = 30000 11:50:08 policy-pap | retry.backoff.ms = 100 11:50:08 policy-pap | sasl.client.callback.handler.class = null 11:50:08 policy-pap | sasl.jaas.config = null 11:50:08 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:50:08 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 11:50:08 policy-pap | sasl.kerberos.service.name = null 11:50:08 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 11:50:08 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 11:50:08 policy-pap | sasl.login.callback.handler.class = null 11:50:08 policy-pap | sasl.login.class = null 11:50:08 policy-pap | sasl.login.connect.timeout.ms = null 11:50:08 policy-pap | sasl.login.read.timeout.ms = null 11:50:08 policy-pap | sasl.login.refresh.buffer.seconds = 300 11:50:08 policy-pap | sasl.login.refresh.min.period.seconds = 60 11:50:08 policy-pap | sasl.login.refresh.window.factor = 0.8 11:50:08 policy-pap | sasl.login.refresh.window.jitter = 0.05 11:50:08 policy-pap | sasl.login.retry.backoff.max.ms = 10000 11:50:08 policy-pap | sasl.login.retry.backoff.ms = 100 11:50:08 policy-pap | sasl.mechanism = GSSAPI 11:50:08 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 11:50:08 policy-pap | sasl.oauthbearer.expected.audience = null 11:50:08 policy-pap | sasl.oauthbearer.expected.issuer = null 11:50:08 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:50:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:50:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:50:08 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 11:50:08 policy-pap | sasl.oauthbearer.scope.claim.name = 
scope 11:50:08 kafka | [2024-02-21 11:48:11,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,831] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,832] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,832] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,832] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,834] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,834] TRACE [Controller id=1 epoch=1] Sending 
become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,834] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,834] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,834] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,834] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,834] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,834] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,834] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,834] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,834] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,834] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,834] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,835] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,835] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,835] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,835] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,835] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for 
partition __consumer_offsets-20 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,835] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,835] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,835] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,835] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,835] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,835] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,835] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,835] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,835] TRACE [Controller id=1 epoch=1] Sending become-leader 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,835] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,835] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,835] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,835] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,835] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,835] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,835] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,835] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, 
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,835] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,835] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,835] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,835] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,836] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,836] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,836] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,836] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition 
__consumer_offsets-22 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,836] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,836] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,836] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,836] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,836] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,836] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,836] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,836] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,837] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 
with 51 become-leader and 0 become-follower partitions (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,839] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,840] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 
11:48:11,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.554836624Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=242.473µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.562846587Z level=info msg="Executing migration" id="create team table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.56414499Z level=info msg="Migration successfully executed" id="create team table" duration=1.298323ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.569676078Z level=info msg="Executing migration" id="add index team.org_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.571490116Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.815038ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.57671277Z level=info msg="Executing migration" id="add unique index team_org_id_name" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.578299287Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.586337ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.583527591Z level=info msg="Executing migration" id="Add column uid in team" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.588241699Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.713308ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.595207051Z level=info msg="Executing migration" id="Update uid column values in team" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.595673526Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=467.695µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.600459556Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.602260784Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.800908ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.607311786Z level=info msg="Executing migration" id="create team member table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.608139374Z level=info msg="Migration successfully executed" id="create team member table" duration=827.598µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.615545401Z level=info msg="Executing migration" id="add index team_member.org_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.617603093Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=2.062292ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.625614316Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.626939339Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.325353ms 11:50:08 grafana 
| logger=migrator t=2024-02-21T11:47:43.630511066Z level=info msg="Executing migration" id="add index team_member.team_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.632061612Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.550486ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.638844472Z level=info msg="Executing migration" id="Add column email to team table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.643886574Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=5.040702ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.651065058Z level=info msg="Executing migration" id="Add column external to team_member table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.655845457Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.780329ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.660830899Z level=info msg="Executing migration" id="Add column permission to team_member table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.665844301Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=5.013352ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.66969495Z level=info msg="Executing migration" id="create dashboard acl table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.670819173Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.123812ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.675840395Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.676823194Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=982.719µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.680568933Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.681684605Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.109402ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.687139801Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.688230912Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.091091ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.696094964Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.697639759Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.552546ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.702939494Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.704664842Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.732478ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.708650403Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.709633904Z level=info msg="Migration successfully executed" id="add index 
dashboard_acl_org_id_role" duration=983.43µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.7161192Z level=info msg="Executing migration" id="add index dashboard_permission" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.717873309Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.753759ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.722120332Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.723188763Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=1.067841ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.729086125Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.729468379Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=382.274µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.735520061Z level=info msg="Executing migration" id="create tag table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.736878214Z level=info msg="Migration successfully executed" id="create tag table" duration=1.353353ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.741338241Z level=info msg="Executing migration" id="add index tag.key_value" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.743300051Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.9657ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.749414164Z level=info msg="Executing migration" id="create login attempt table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.750374094Z level=info msg="Migration successfully executed" id="create login attempt table" duration=959.33µs 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE 
IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0450-pdpgroup.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name 
VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0470-pdp.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.755903111Z level=info msg="Executing migration" id="add index login_attempt.username" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.75767177Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.768229ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.762208576Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.763316138Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.102952ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.768512692Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.787562528Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=19.051046ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.790897993Z level=info msg="Executing migration" id="create login_attempt v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.791531749Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=635.676µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.79638254Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.798218868Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.828198ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.807548725Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 11:50:08 grafana | logger=migrator 
t=2024-02-21T11:47:43.8080183Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=469.465µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.813934201Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.81488396Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=949.349µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.819533069Z level=info msg="Executing migration" id="create user auth table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.820468548Z level=info msg="Migration successfully executed" id="create user auth table" duration=935.249µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.825986675Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.827729843Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.742928ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.832706545Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.832817666Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=116.621µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.837593725Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.843104652Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=5.510397ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.851127295Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.856741293Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=5.616308ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.859972836Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.865135119Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.161093ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.870164602Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.875413286Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=5.247904ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.884931614Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.8864941Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.562696ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.892485582Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.8980654Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.580638ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.902762409Z level=info msg="Executing migration" id="create server_lock table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.903698028Z 
level=info msg="Migration successfully executed" id="create server_lock table" duration=937.699µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.90874108Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.909822401Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.079711ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.913893474Z level=info msg="Executing migration" id="create user auth token table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.916252467Z level=info msg="Migration successfully executed" id="create user auth token table" duration=2.358303ms 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY 
PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0570-toscadatatype.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 
0600-toscanodetemplate.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.921638043Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.922664234Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.025881ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.931657967Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.933705788Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=2.048011ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.941017323Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.942241166Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.224713ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.946817583Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.95234489Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=5.527067ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.960978979Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.962408185Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.427246ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.970114064Z level=info msg="Executing migration" id="create cache_data table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.971585829Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.470925ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.975625682Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.977371099Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.745658ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.982767575Z level=info msg="Executing migration" id="create short_url table v1" 11:50:08 simulator | Policy simulator config file: 
/opt/app/policy/simulators/etc/mounted/simParameters.json 11:50:08 simulator | overriding logback.xml 11:50:08 simulator | 2024-02-21 11:47:39,066 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json 11:50:08 simulator | 2024-02-21 11:47:39,123 INFO org.onap.policy.models.simulators starting 11:50:08 simulator | 2024-02-21 11:47:39,124 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties 11:50:08 simulator | 2024-02-21 11:47:39,302 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION 11:50:08 simulator | 2024-02-21 11:47:39,303 INFO org.onap.policy.models.simulators starting A&AI simulator 11:50:08 simulator | 2024-02-21 11:47:39,420 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 11:50:08 simulator | 2024-02-21 11:47:39,429 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 11:50:08 simulator | 2024-02-21 11:47:39,432 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 11:50:08 simulator | 2024-02-21 11:47:39,436 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 11:50:08 simulator | 2024-02-21 11:47:39,490 INFO Session workerName=node0 11:50:08 simulator | 2024-02-21 11:47:39,982 INFO Using GSON 
for REST calls 11:50:08 simulator | 2024-02-21 11:47:40,075 INFO Started o.e.j.s.ServletContextHandler@2a2c13a8{/,null,AVAILABLE} 11:50:08 simulator | 2024-02-21 11:47:40,083 INFO Started A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} 11:50:08 simulator | 2024-02-21 11:47:40,092 INFO Started Server@45905bff{STARTING}[11.0.20,sto=0] @1610ms 11:50:08 simulator | 2024-02-21 11:47:40,093 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,AVAILABLE}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4339 ms. 11:50:08 simulator | 2024-02-21 11:47:40,099 INFO org.onap.policy.models.simulators starting SDNC simulator 11:50:08 simulator | 2024-02-21 11:47:40,102 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 11:50:08 simulator | 2024-02-21 11:47:40,102 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 11:50:08 simulator | 2024-02-21 11:47:40,103 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC 
simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 11:50:08 simulator | 2024-02-21 11:47:40,104 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 11:50:08 simulator | 2024-02-21 11:47:40,119 INFO Session workerName=node0 11:50:08 simulator | 2024-02-21 11:47:40,193 INFO Using GSON for REST calls 11:50:08 simulator | 2024-02-21 11:47:40,207 INFO Started o.e.j.s.ServletContextHandler@62452cc9{/,null,AVAILABLE} 11:50:08 simulator | 2024-02-21 11:47:40,208 INFO Started SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} 11:50:08 policy-api | Waiting for mariadb port 3306... 11:50:08 policy-api | mariadb (172.17.0.3:3306) open 11:50:08 policy-api | Waiting for policy-db-migrator port 6824... 11:50:08 policy-api | policy-db-migrator (172.17.0.6:6824) open 11:50:08 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 11:50:08 policy-api | 11:50:08 policy-api | . ____ _ __ _ _ 11:50:08 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 11:50:08 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 11:50:08 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 11:50:08 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / 11:50:08 policy-api | =========|_|==============|___/=/_/_/_/ 11:50:08 policy-api | :: Spring Boot :: (v3.1.8) 11:50:08 policy-api | 11:50:08 policy-api | [2024-02-21T11:47:47.634+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.10 with PID 20 (/app/api.jar started by policy in /opt/app/policy/api/bin) 11:50:08 policy-api | [2024-02-21T11:47:47.635+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" 11:50:08 policy-api | [2024-02-21T11:47:49.403+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 11:50:08 policy-api | [2024-02-21T11:47:49.493+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 80 ms. Found 6 JPA repository interfaces. 11:50:08 policy-api | [2024-02-21T11:47:49.919+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 11:50:08 policy-api | [2024-02-21T11:47:49.919+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 11:50:08 policy-api | [2024-02-21T11:47:50.555+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 11:50:08 policy-api | [2024-02-21T11:47:50.565+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 11:50:08 policy-api | [2024-02-21T11:47:50.568+00:00|INFO|StandardService|main] Starting service [Tomcat] 11:50:08 policy-api | [2024-02-21T11:47:50.568+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] 11:50:08 policy-api | [2024-02-21T11:47:50.658+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 11:50:08 policy-api | [2024-02-21T11:47:50.659+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2960 ms 11:50:08 policy-api | [2024-02-21T11:47:51.101+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 11:50:08 policy-api | [2024-02-21T11:47:51.170+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 11:50:08 policy-api | [2024-02-21T11:47:51.173+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer 11:50:08 policy-api | [2024-02-21T11:47:51.222+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 11:50:08 policy-api | [2024-02-21T11:47:51.574+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 11:50:08 policy-api | [2024-02-21T11:47:51.599+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 11:50:08 policy-api | [2024-02-21T11:47:51.707+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@7c8d5312 11:50:08 policy-api | [2024-02-21T11:47:51.709+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 11:50:08 policy-api | [2024-02-21T11:47:51.736+00:00|WARN|deprecation|main] HHH90000025: MariaDB103Dialect does not need to be specified explicitly using 'hibernate.dialect' (remove the property setting and it will be selected by default) 11:50:08 policy-api | [2024-02-21T11:47:51.738+00:00|WARN|deprecation|main] HHH90000026: MariaDB103Dialect has been deprecated; use org.hibernate.dialect.MariaDBDialect instead 11:50:08 policy-api | [2024-02-21T11:47:53.575+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 11:50:08 policy-api | [2024-02-21T11:47:53.578+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 11:50:08 policy-api | [2024-02-21T11:47:54.598+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 11:50:08 policy-api | [2024-02-21T11:47:55.402+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 11:50:08 policy-api | [2024-02-21T11:47:56.498+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 11:50:08 policy-api | [2024-02-21T11:47:56.704+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@4bbb00a4, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@543d242e, org.springframework.security.web.context.SecurityContextHolderFilter@62c4ad40, org.springframework.security.web.header.HeaderWriterFilter@4567dcbc, org.springframework.security.web.authentication.logout.LogoutFilter@53d257e7, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@58d291c1, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@9bc10bd, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@2e26841f, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@5f967ad3, org.springframework.security.web.access.ExceptionTranslationFilter@6aca85da, org.springframework.security.web.access.intercept.AuthorizationFilter@2f84848e] 11:50:08 policy-api | [2024-02-21T11:47:57.524+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 11:50:08 policy-api | [2024-02-21T11:47:57.635+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 11:50:08 policy-api | [2024-02-21T11:47:57.659+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' 11:50:08 policy-api | [2024-02-21T11:47:57.679+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.8 seconds (process running for 11.487) 11:50:08 policy-api | [2024-02-21T11:48:15.136+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 11:50:08 policy-api | [2024-02-21T11:48:15.136+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 11:50:08 policy-api | [2024-02-21T11:48:15.137+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms 11:50:08 policy-api | [2024-02-21T11:48:15.408+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers: 11:50:08 policy-api | [] 11:50:08 simulator | 2024-02-21 11:47:40,208 INFO Started Server@45e37a7e{STARTING}[11.0.20,sto=0] @1727ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.983738304Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=970.539µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.988585975Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.990384804Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.798449ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.994531056Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.994783208Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=251.632µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.998527348Z level=info msg="Executing migration" id="delete alert_definition table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:43.99869861Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=170.672µs 11:50:08 grafana | 
logger=migrator t=2024-02-21T11:47:44.004931564Z level=info msg="Executing migration" id="recreate alert_definition table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.006366369Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.433385ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.014666875Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.016353521Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.688367ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.021538905Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.022628656Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.084141ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.02976355Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.030015873Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=251.703µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.03653756Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.038497381Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.95961ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.04823311Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.049394073Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.160883ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.054589567Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.056392105Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.802988ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.061050283Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.062397068Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.336854ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.068154087Z level=info msg="Executing migration" id="Add column paused in alert_definition" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.074670854Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=6.512417ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.08104713Z level=info msg="Executing migration" id="drop alert_definition table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.082244833Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.198173ms 11:50:08 grafana | 
logger=migrator t=2024-02-21T11:47:44.088139013Z level=info msg="Executing migration" id="delete alert_definition_version table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.088412256Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=273.073µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.095200056Z level=info msg="Executing migration" id="recreate alert_definition_version table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.096884153Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.683657ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.10144178Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.103752144Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=2.312524ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.111690836Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.113242523Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.553117ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.118715419Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.11882264Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=109.981µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.123821641Z level=info msg="Executing migration" id="drop alert_definition_version table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.125052995Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.231254ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.129272907Z level=info msg="Executing migration" id="create alert_instance table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.130247948Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=965.571µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.135704654Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.136763285Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.058421ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.140822618Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.142651276Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.827198ms 11:50:08 simulator | 2024-02-21 11:47:40,208 INFO JettyJerseyServer 
[Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,AVAILABLE}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4895 ms. 11:50:08 simulator | 2024-02-21 11:47:40,209 INFO org.onap.policy.models.simulators starting SO simulator 11:50:08 simulator | 2024-02-21 11:47:40,213 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 11:50:08 simulator | 2024-02-21 11:47:40,213 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 11:50:08 simulator | 2024-02-21 11:47:40,214 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 11:50:08 simulator | 2024-02-21 11:47:40,215 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 
922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 11:50:08 simulator | 2024-02-21 11:47:40,218 INFO Session workerName=node0 11:50:08 simulator | 2024-02-21 11:47:40,300 INFO Using GSON for REST calls 11:50:08 simulator | 2024-02-21 11:47:40,315 INFO Started o.e.j.s.ServletContextHandler@488eb7f2{/,null,AVAILABLE} 11:50:08 simulator | 2024-02-21 11:47:40,317 INFO Started SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} 11:50:08 simulator | 2024-02-21 11:47:40,317 INFO Started Server@7516e4e5{STARTING}[11.0.20,sto=0] @1836ms 11:50:08 simulator | 2024-02-21 11:47:40,317 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,AVAILABLE}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4897 ms. 11:50:08 simulator | 2024-02-21 11:47:40,318 INFO org.onap.policy.models.simulators starting VFC simulator 11:50:08 simulator | 2024-02-21 11:47:40,321 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 11:50:08 simulator | 2024-02-21 11:47:40,321 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 11:50:08 simulator | 2024-02-21 11:47:40,322 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, 
sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 11:50:08 simulator | 2024-02-21 11:47:40,322 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 11:50:08 simulator | 2024-02-21 11:47:40,333 INFO Session workerName=node0 11:50:08 simulator | 2024-02-21 11:47:40,377 INFO Using GSON for REST calls 11:50:08 simulator | 2024-02-21 11:47:40,385 INFO Started o.e.j.s.ServletContextHandler@6035b93b{/,null,AVAILABLE} 11:50:08 simulator | 2024-02-21 11:47:40,387 INFO Started VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} 11:50:08 simulator | 2024-02-21 11:47:40,387 INFO Started Server@6f0b0a5e{STARTING}[11.0.20,sto=0] @1906ms 11:50:08 simulator | 2024-02-21 11:47:40,387 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,AVAILABLE}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4935 ms. 
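The simulator entries above all follow the same pattern: each simulator (A&AI on 6666, SDNC on 6668, SO on 6669, VFC on 6670) is an embedded Jetty 11 server with a Jersey ServletContainer mounted at /* under contextPath=/, using GSON for its REST payloads. Purely as an illustration of that shape, and not the actual ONAP JettyJerseyServer/simulator code, a minimal embedded Jetty 11 + Jersey server might look like the sketch below; the class names, the healthcheck path, the port reuse, and the canned JSON body are invented for the example, and the real simulators additionally plug in a GSON-based JSON provider rather than returning a hand-built string.

// Hypothetical stand-in, not the ONAP implementation: an embedded Jetty 11 server
// exposing one Jersey (JAX-RS) resource, matching the stack reported in the log
// above (jetty-11.0.20 + org.glassfish.jersey.servlet.ServletContainer at /*).
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
import org.glassfish.jersey.server.ResourceConfig;
import org.glassfish.jersey.servlet.ServletContainer;

public class MiniRestSimulator {

    @Path("healthcheck")                  // hypothetical endpoint name, for illustration only
    public static class HealthResource {
        @GET
        @Produces(MediaType.APPLICATION_JSON)
        public String up() {
            return "{\"healthy\": true}"; // trivial canned JSON body (real simulators use a GSON provider)
        }
    }

    public static void main(String[] args) throws Exception {
        Server server = new Server(6666);                 // port borrowed from the A&AI simulator entry above
        ServletContextHandler context = new ServletContextHandler();
        context.setContextPath("/");                      // matches contextPath=/ in the log
        ResourceConfig config = new ResourceConfig(HealthResource.class);
        context.addServlet(new ServletHolder(new ServletContainer(config)), "/*");
        server.setHandler(context);
        server.start();                                   // Jetty then prints "Started ..." lines like those above
        server.join();
    }
}

With something of this shape running, exercising a simulator is just a plain HTTP call, e.g. curl http://localhost:6666/healthcheck (an assumed path for this sketch, not one taken from the log).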
11:50:08 simulator | 2024-02-21 11:47:40,388 INFO org.onap.policy.models.simulators started 11:50:08 kafka | [2024-02-21 11:48:11,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,841] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica 
(state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0630-toscanodetype.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0660-toscaparameter.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-pap | sasl.oauthbearer.sub.claim.name = sub 11:50:08 policy-pap | sasl.oauthbearer.token.endpoint.url = null 11:50:08 policy-pap | security.protocol = PLAINTEXT 11:50:08 policy-pap | security.providers = null 11:50:08 
policy-pap | send.buffer.bytes = 131072 11:50:08 policy-pap | session.timeout.ms = 45000 11:50:08 policy-pap | socket.connection.setup.timeout.max.ms = 30000 11:50:08 policy-pap | socket.connection.setup.timeout.ms = 10000 11:50:08 policy-pap | ssl.cipher.suites = null 11:50:08 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:50:08 policy-pap | ssl.endpoint.identification.algorithm = https 11:50:08 policy-pap | ssl.engine.factory.class = null 11:50:08 policy-pap | ssl.key.password = null 11:50:08 policy-pap | ssl.keymanager.algorithm = SunX509 11:50:08 policy-pap | ssl.keystore.certificate.chain = null 11:50:08 policy-pap | ssl.keystore.key = null 11:50:08 policy-pap | ssl.keystore.location = null 11:50:08 policy-pap | ssl.keystore.password = null 11:50:08 policy-pap | ssl.keystore.type = JKS 11:50:08 policy-pap | ssl.protocol = TLSv1.3 11:50:08 policy-pap | ssl.provider = null 11:50:08 policy-pap | ssl.secure.random.implementation = null 11:50:08 policy-pap | ssl.trustmanager.algorithm = PKIX 11:50:08 policy-pap | ssl.truststore.certificates = null 11:50:08 policy-pap | ssl.truststore.location = null 11:50:08 policy-pap | ssl.truststore.password = null 11:50:08 policy-pap | ssl.truststore.type = JKS 11:50:08 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:50:08 policy-pap | 11:50:08 policy-pap | [2024-02-21T11:48:09.330+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 11:50:08 policy-pap | [2024-02-21T11:48:09.330+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 11:50:08 policy-pap | [2024-02-21T11:48:09.330+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708516089330 11:50:08 policy-pap | [2024-02-21T11:48:09.331+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0670-toscapolicies.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0690-toscapolicy.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 
policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0730-toscaproperty.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT 
NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0770-toscarequirement.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0780-toscarequirements.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) 11:50:08 policy-pap | [2024-02-21T11:48:09.688+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 11:50:08 policy-pap | [2024-02-21T11:48:09.868+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 11:50:08 policy-pap | [2024-02-21T11:48:10.134+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@42e4431, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@5e198c40, org.springframework.security.web.context.SecurityContextHolderFilter@416c1b0, org.springframework.security.web.header.HeaderWriterFilter@565c887e, org.springframework.security.web.authentication.logout.LogoutFilter@1734b1a, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@70b1028d, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@60fe75f7, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@7c8f803d, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@dcdb883, org.springframework.security.web.access.ExceptionTranslationFilter@426913c4, org.springframework.security.web.access.intercept.AuthorizationFilter@44c2e8a8] 11:50:08 policy-pap | [2024-02-21T11:48:10.961+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 11:50:08 policy-pap | [2024-02-21T11:48:11.067+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 11:50:08 policy-pap | [2024-02-21T11:48:11.087+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' 11:50:08 policy-pap | [2024-02-21T11:48:11.107+00:00|INFO|ServiceManager|main] Policy PAP starting 11:50:08 policy-pap | [2024-02-21T11:48:11.108+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 11:50:08 policy-pap | [2024-02-21T11:48:11.109+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 11:50:08 policy-pap | [2024-02-21T11:48:11.110+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 11:50:08 policy-pap | [2024-02-21T11:48:11.110+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 11:50:08 policy-pap | [2024-02-21T11:48:11.110+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 11:50:08 policy-pap | [2024-02-21T11:48:11.110+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 11:50:08 policy-pap | [2024-02-21T11:48:11.114+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=deda7b7f-e78f-4ba0-9889-983a637c2ccd, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@6442cf3e 11:50:08 policy-pap | [2024-02-21T11:48:11.124+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=deda7b7f-e78f-4ba0-9889-983a637c2ccd, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, 
apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 11:50:08 policy-pap | [2024-02-21T11:48:11.125+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 11:50:08 policy-pap | allow.auto.create.topics = true 11:50:08 policy-pap | auto.commit.interval.ms = 5000 11:50:08 policy-pap | auto.include.jmx.reporter = true 11:50:08 policy-pap | auto.offset.reset = latest 11:50:08 policy-pap | bootstrap.servers = [kafka:9092] 11:50:08 policy-pap | check.crcs = true 11:50:08 policy-pap | client.dns.lookup = use_all_dns_ips 11:50:08 policy-pap | client.id = consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-3 11:50:08 policy-pap | client.rack = 11:50:08 policy-pap | connections.max.idle.ms = 540000 11:50:08 policy-pap | default.api.timeout.ms = 60000 11:50:08 policy-pap | enable.auto.commit = true 11:50:08 policy-pap | exclude.internal.topics = true 11:50:08 policy-pap | fetch.max.bytes = 52428800 11:50:08 policy-pap | fetch.max.wait.ms = 500 11:50:08 policy-pap | fetch.min.bytes = 1 11:50:08 policy-pap | group.id = deda7b7f-e78f-4ba0-9889-983a637c2ccd 11:50:08 policy-pap | group.instance.id = null 11:50:08 policy-pap | heartbeat.interval.ms = 3000 11:50:08 policy-pap | interceptor.classes = [] 11:50:08 policy-pap | internal.leave.group.on.close = true 11:50:08 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 11:50:08 policy-pap | isolation.level = read_uncommitted 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 
0810-toscatopologytemplate.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0820-toscatrigger.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 kafka | [2024-02-21 11:48:11,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,842] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,843] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,843] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,856] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,860] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,860] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,860] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,860] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,860] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], 
isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.147296944Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.15370745Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=6.410786ms 11:50:08 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:50:08 policy-pap | max.partition.fetch.bytes = 1048576 11:50:08 policy-pap | max.poll.interval.ms = 300000 11:50:08 policy-pap | max.poll.records = 500 11:50:08 policy-pap | metadata.max.age.ms = 300000 11:50:08 policy-pap | metric.reporters = [] 11:50:08 policy-pap | metrics.num.samples = 2 11:50:08 policy-pap | metrics.recording.level = INFO 11:50:08 policy-pap | metrics.sample.window.ms = 30000 11:50:08 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 11:50:08 policy-pap | 
receive.buffer.bytes = 65536 11:50:08 policy-pap | reconnect.backoff.max.ms = 1000 11:50:08 policy-pap | reconnect.backoff.ms = 50 11:50:08 policy-pap | request.timeout.ms = 30000 11:50:08 policy-pap | retry.backoff.ms = 100 11:50:08 policy-pap | sasl.client.callback.handler.class = null 11:50:08 policy-pap | sasl.jaas.config = null 11:50:08 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:50:08 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 11:50:08 policy-pap | sasl.kerberos.service.name = null 11:50:08 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 11:50:08 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 11:50:08 policy-pap | sasl.login.callback.handler.class = null 11:50:08 policy-pap | sasl.login.class = null 11:50:08 policy-pap | sasl.login.connect.timeout.ms = null 11:50:08 policy-pap | sasl.login.read.timeout.ms = null 11:50:08 policy-pap | sasl.login.refresh.buffer.seconds = 300 11:50:08 policy-pap | sasl.login.refresh.min.period.seconds = 60 11:50:08 policy-pap | sasl.login.refresh.window.factor = 0.8 11:50:08 policy-pap | sasl.login.refresh.window.jitter = 0.05 11:50:08 policy-pap | sasl.login.retry.backoff.max.ms = 10000 11:50:08 policy-pap | sasl.login.retry.backoff.ms = 100 11:50:08 policy-pap | sasl.mechanism = GSSAPI 11:50:08 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 11:50:08 policy-pap | sasl.oauthbearer.expected.audience = null 11:50:08 policy-pap | sasl.oauthbearer.expected.issuer = null 11:50:08 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:50:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:50:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:50:08 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 11:50:08 policy-pap | sasl.oauthbearer.scope.claim.name = scope 11:50:08 policy-pap | sasl.oauthbearer.sub.claim.name = sub 11:50:08 policy-pap | sasl.oauthbearer.token.endpoint.url = null 11:50:08 policy-pap | security.protocol = PLAINTEXT 11:50:08 policy-pap | security.providers = null 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.162252568Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.163446191Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.193733ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.170782536Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.17202202Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.242754ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.177777809Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.223285829Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=45.50533ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.296454704Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.336061403Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=39.606969ms 11:50:08 grafana | logger=migrator 
t=2024-02-21T11:47:44.339350667Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.340307048Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=956.031µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.347020096Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.348053068Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.032652ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.353460063Z level=info msg="Executing migration" id="add current_reason column related to current_state" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.359103681Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.643108ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.370489079Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.379573093Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=9.084274ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.385736576Z level=info msg="Executing migration" id="create alert_rule table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.386569585Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=824.559µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.390448075Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.39195814Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.509455ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.39570996Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.397204145Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.493865ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.402976794Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.404150447Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.173343ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.410766445Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.410866346Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=101.161µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.41710024Z level=info msg="Executing migration" id="add column for to alert_rule" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.424357765Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=7.258655ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.430098125Z 
level=info msg="Executing migration" id="add column annotations to alert_rule" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.436180157Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=6.081592ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.441527773Z level=info msg="Executing migration" id="add column labels to alert_rule" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.451403174Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=9.875431ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.454942141Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.455678069Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=735.928µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.486482707Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.488240235Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.757408ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.495020015Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.500791704Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=5.772469ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.504558033Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.510509105Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=5.950522ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.517316685Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.518348866Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.032011ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.522332307Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.530393561Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=8.061934ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.533993477Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.540235702Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=6.241645ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.548831111Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.548931882Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=98.921µs 11:50:08 policy-pap | send.buffer.bytes = 131072 11:50:08 policy-pap | session.timeout.ms = 45000 11:50:08 policy-pap | socket.connection.setup.timeout.max.ms = 30000 11:50:08 
policy-pap | socket.connection.setup.timeout.ms = 10000 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-pap | ssl.cipher.suites = null 11:50:08 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:50:08 policy-pap | ssl.endpoint.identification.algorithm = https 11:50:08 policy-pap | ssl.engine.factory.class = null 11:50:08 policy-pap | ssl.key.password = null 11:50:08 policy-pap | ssl.keymanager.algorithm = SunX509 11:50:08 policy-pap | ssl.keystore.certificate.chain = null 11:50:08 policy-pap | ssl.keystore.key = null 11:50:08 policy-pap | ssl.keystore.location = null 11:50:08 policy-pap | ssl.keystore.password = null 
11:50:08 policy-pap | ssl.keystore.type = JKS 11:50:08 policy-pap | ssl.protocol = TLSv1.3 11:50:08 policy-pap | ssl.provider = null 11:50:08 policy-pap | ssl.secure.random.implementation = null 11:50:08 policy-pap | ssl.trustmanager.algorithm = PKIX 11:50:08 policy-pap | ssl.truststore.certificates = null 11:50:08 policy-pap | ssl.truststore.location = null 11:50:08 policy-pap | ssl.truststore.password = null 11:50:08 policy-pap | ssl.truststore.type = JKS 11:50:08 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:50:08 policy-pap | 11:50:08 policy-pap | [2024-02-21T11:48:11.131+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 11:50:08 policy-pap | [2024-02-21T11:48:11.131+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 11:50:08 policy-pap | [2024-02-21T11:48:11.131+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708516091131 11:50:08 policy-pap | [2024-02-21T11:48:11.132+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-3, groupId=deda7b7f-e78f-4ba0-9889-983a637c2ccd] Subscribed to topic(s): policy-pdp-pap 11:50:08 policy-pap | [2024-02-21T11:48:11.132+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 11:50:08 policy-pap | [2024-02-21T11:48:11.132+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=b05e8157-faba-4694-b27b-3d12a76e3107, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@447630c4 11:50:08 policy-pap | [2024-02-21T11:48:11.132+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=b05e8157-faba-4694-b27b-3d12a76e3107, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 11:50:08 policy-pap | [2024-02-21T11:48:11.133+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 11:50:08 policy-pap | allow.auto.create.topics = true 11:50:08 policy-pap | auto.commit.interval.ms = 5000 11:50:08 policy-pap | auto.include.jmx.reporter = true 11:50:08 policy-pap | auto.offset.reset = latest 11:50:08 policy-pap | bootstrap.servers = [kafka:9092] 11:50:08 policy-pap | check.crcs = true 11:50:08 policy-pap | client.dns.lookup = use_all_dns_ips 11:50:08 policy-pap | client.id = consumer-policy-pap-4 11:50:08 policy-pap | client.rack = 11:50:08 policy-pap | connections.max.idle.ms = 540000 11:50:08 policy-pap | default.api.timeout.ms = 60000 11:50:08 policy-pap | enable.auto.commit = true 11:50:08 policy-pap | exclude.internal.topics = true 11:50:08 policy-pap | fetch.max.bytes = 52428800 11:50:08 policy-pap | 
fetch.max.wait.ms = 500 11:50:08 policy-pap | fetch.min.bytes = 1 11:50:08 policy-pap | group.id = policy-pap 11:50:08 policy-pap | group.instance.id = null 11:50:08 policy-pap | heartbeat.interval.ms = 3000 11:50:08 policy-pap | interceptor.classes = [] 11:50:08 policy-pap | internal.leave.group.on.close = true 11:50:08 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT 
FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 11:50:08 policy-pap | isolation.level = read_uncommitted 11:50:08 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:50:08 policy-pap | max.partition.fetch.bytes = 1048576 11:50:08 policy-pap | max.poll.interval.ms = 300000 11:50:08 policy-pap | max.poll.records = 500 11:50:08 policy-pap | metadata.max.age.ms = 300000 11:50:08 policy-pap | metric.reporters = [] 11:50:08 policy-pap | metrics.num.samples = 2 11:50:08 policy-pap | metrics.recording.level = INFO 11:50:08 policy-pap | metrics.sample.window.ms = 30000 11:50:08 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 11:50:08 policy-pap | receive.buffer.bytes = 65536 11:50:08 policy-pap | reconnect.backoff.max.ms = 1000 11:50:08 policy-pap | reconnect.backoff.ms = 50 11:50:08 policy-pap | request.timeout.ms = 30000 11:50:08 policy-pap | retry.backoff.ms = 100 11:50:08 policy-pap | sasl.client.callback.handler.class = null 11:50:08 policy-pap | sasl.jaas.config = null 11:50:08 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:50:08 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 11:50:08 
policy-pap | sasl.kerberos.service.name = null 11:50:08 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 11:50:08 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 11:50:08 policy-pap | sasl.login.callback.handler.class = null 11:50:08 policy-pap | sasl.login.class = null 11:50:08 policy-pap | sasl.login.connect.timeout.ms = null 11:50:08 policy-pap | sasl.login.read.timeout.ms = null 11:50:08 policy-pap | sasl.login.refresh.buffer.seconds = 300 11:50:08 policy-pap | sasl.login.refresh.min.period.seconds = 60 11:50:08 policy-pap | sasl.login.refresh.window.factor = 0.8 11:50:08 policy-pap | sasl.login.refresh.window.jitter = 0.05 11:50:08 policy-pap | sasl.login.retry.backoff.max.ms = 10000 11:50:08 policy-pap | sasl.login.retry.backoff.ms = 100 11:50:08 policy-pap | sasl.mechanism = GSSAPI 11:50:08 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 11:50:08 policy-pap | sasl.oauthbearer.expected.audience = null 11:50:08 policy-pap | sasl.oauthbearer.expected.issuer = null 11:50:08 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:50:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:50:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:50:08 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 11:50:08 policy-pap | sasl.oauthbearer.scope.claim.name = scope 11:50:08 policy-pap | sasl.oauthbearer.sub.claim.name = sub 11:50:08 policy-pap | sasl.oauthbearer.token.endpoint.url = null 11:50:08 policy-pap | security.protocol = PLAINTEXT 11:50:08 policy-pap | security.providers = null 11:50:08 policy-pap | send.buffer.bytes = 131072 11:50:08 policy-pap | session.timeout.ms = 45000 11:50:08 policy-pap | socket.connection.setup.timeout.max.ms = 30000 11:50:08 policy-pap | socket.connection.setup.timeout.ms = 10000 11:50:08 policy-pap | ssl.cipher.suites = null 11:50:08 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:50:08 policy-pap | ssl.endpoint.identification.algorithm = https 11:50:08 policy-pap | ssl.engine.factory.class = null 11:50:08 policy-pap | ssl.key.password = null 11:50:08 policy-pap | ssl.keymanager.algorithm = SunX509 11:50:08 policy-pap | ssl.keystore.certificate.chain = null 11:50:08 policy-pap | ssl.keystore.key = null 11:50:08 policy-pap | ssl.keystore.location = null 11:50:08 policy-pap | ssl.keystore.password = null 11:50:08 policy-pap | ssl.keystore.type = JKS 11:50:08 policy-pap | ssl.protocol = TLSv1.3 11:50:08 policy-pap | ssl.provider = null 11:50:08 policy-pap | ssl.secure.random.implementation = null 11:50:08 policy-pap | ssl.trustmanager.algorithm = PKIX 11:50:08 policy-pap | ssl.truststore.certificates = null 11:50:08 policy-pap | ssl.truststore.location = null 11:50:08 policy-pap | ssl.truststore.password = null 11:50:08 policy-pap | ssl.truststore.type = JKS 11:50:08 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 11:50:08 policy-pap | 11:50:08 policy-pap | [2024-02-21T11:48:11.137+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 11:50:08 policy-pap | [2024-02-21T11:48:11.137+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 11:50:08 policy-pap | [2024-02-21T11:48:11.138+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708516091137 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT 
ON DELETE RESTRICT 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 policy-pap | [2024-02-21T11:48:11.138+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 11:50:08 policy-pap | [2024-02-21T11:48:11.138+00:00|INFO|ServiceManager|main] Policy PAP starting topics 11:50:08 policy-pap | [2024-02-21T11:48:11.138+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=b05e8157-faba-4694-b27b-3d12a76e3107, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 11:50:08 policy-pap | [2024-02-21T11:48:11.138+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=deda7b7f-e78f-4ba0-9889-983a637c2ccd, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper 
[fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 11:50:08 policy-pap | [2024-02-21T11:48:11.138+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=12465c55-4625-4ba3-b00d-50849cf599b4, alive=false, publisher=null]]: starting 11:50:08 policy-pap | [2024-02-21T11:48:11.154+00:00|INFO|ProducerConfig|main] ProducerConfig values: 11:50:08 policy-pap | acks = -1 11:50:08 policy-pap | auto.include.jmx.reporter = true 11:50:08 policy-pap | batch.size = 16384 11:50:08 policy-pap | bootstrap.servers = [kafka:9092] 11:50:08 policy-pap | buffer.memory = 33554432 11:50:08 policy-pap | client.dns.lookup = use_all_dns_ips 11:50:08 policy-pap | client.id = producer-1 11:50:08 policy-pap | compression.type = none 11:50:08 policy-pap | connections.max.idle.ms = 540000 11:50:08 policy-pap | delivery.timeout.ms = 120000 11:50:08 policy-pap | enable.idempotence = true 11:50:08 policy-pap | interceptor.classes = [] 11:50:08 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:50:08 policy-pap | linger.ms = 0 11:50:08 policy-pap | max.block.ms = 60000 11:50:08 policy-pap | max.in.flight.requests.per.connection = 5 11:50:08 policy-pap | max.request.size = 1048576 11:50:08 policy-pap | metadata.max.age.ms = 300000 11:50:08 policy-pap | metadata.max.idle.ms = 300000 11:50:08 policy-pap | metric.reporters = [] 11:50:08 policy-pap | metrics.num.samples = 2 11:50:08 policy-pap | metrics.recording.level = INFO 11:50:08 policy-pap | metrics.sample.window.ms = 30000 11:50:08 policy-pap | partitioner.adaptive.partitioning.enable = true 11:50:08 policy-pap | partitioner.availability.timeout.ms = 0 11:50:08 policy-pap | partitioner.class = null 11:50:08 policy-pap | partitioner.ignore.keys = false 11:50:08 policy-pap | receive.buffer.bytes = 32768 11:50:08 policy-pap | reconnect.backoff.max.ms = 1000 11:50:08 policy-pap | reconnect.backoff.ms = 50 11:50:08 policy-pap | request.timeout.ms = 30000 11:50:08 policy-pap | retries = 2147483647 11:50:08 policy-pap | retry.backoff.ms = 100 11:50:08 policy-pap | sasl.client.callback.handler.class = null 11:50:08 policy-pap | sasl.jaas.config = null 11:50:08 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:50:08 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 11:50:08 policy-pap | sasl.kerberos.service.name = null 11:50:08 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 11:50:08 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 11:50:08 policy-pap | sasl.login.callback.handler.class = null 11:50:08 policy-pap | sasl.login.class = null 11:50:08 policy-pap | sasl.login.connect.timeout.ms = null 11:50:08 policy-pap | sasl.login.read.timeout.ms = null 11:50:08 policy-pap | sasl.login.refresh.buffer.seconds = 300 11:50:08 policy-pap | sasl.login.refresh.min.period.seconds = 60 11:50:08 policy-pap | sasl.login.refresh.window.factor = 0.8 11:50:08 policy-pap | sasl.login.refresh.window.jitter = 0.05 11:50:08 policy-pap | sasl.login.retry.backoff.max.ms = 10000 11:50:08 policy-pap | sasl.login.retry.backoff.ms = 100 11:50:08 policy-pap | sasl.mechanism = GSSAPI 11:50:08 policy-pap | 
sasl.oauthbearer.clock.skew.seconds = 30 11:50:08 policy-pap | sasl.oauthbearer.expected.audience = null 11:50:08 policy-pap | sasl.oauthbearer.expected.issuer = null 11:50:08 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:50:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:50:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:50:08 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 11:50:08 policy-pap | sasl.oauthbearer.scope.claim.name = scope 11:50:08 policy-pap | sasl.oauthbearer.sub.claim.name = sub 11:50:08 policy-pap | sasl.oauthbearer.token.endpoint.url = null 11:50:08 policy-pap | security.protocol = PLAINTEXT 11:50:08 policy-pap | security.providers = null 11:50:08 policy-pap | send.buffer.bytes = 131072 11:50:08 policy-pap | socket.connection.setup.timeout.max.ms = 30000 11:50:08 policy-pap | socket.connection.setup.timeout.ms = 10000 11:50:08 policy-pap | ssl.cipher.suites = null 11:50:08 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,861] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,862] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,862] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,862] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,862] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,862] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,862] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,909] TRACE [Broker id=1] Handling LeaderAndIsr 
request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,909] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,909] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,909] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,909] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,909] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,909] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,909] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,909] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,909] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,909] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.55648431Z level=info msg="Executing migration" id="create alert_rule_version table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.557979265Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.489805ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.566420432Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.568068569Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.647197ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.577408355Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.579134494Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on 
rule_org_id, rule_namespace_uid and rule_group columns" duration=1.725569ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.583260486Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.583354968Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=95.502µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.586951904Z level=info msg="Executing migration" id="add column for to alert_rule_version" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.597052048Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=10.095534ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.604274263Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.60978859Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=5.509427ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.614438559Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.620991436Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.553438ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.624588323Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.631116071Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.527228ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.637606128Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.64368104Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.074582ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.647232817Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.647295467Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=63.48µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.6504979Z level=info msg="Executing migration" id=create_alert_configuration_table 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.651227368Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=728.998µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.654630863Z level=info msg="Executing migration" id="Add column default in alert_configuration" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.664159081Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=9.529228ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.671286785Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.671347856Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to 
MEDIUMTEXT if mysql" duration=61.331µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.675363327Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.683103897Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=7.74374ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.686410691Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.687485463Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.069422ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.693391424Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.699650998Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.259174ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.707979534Z level=info msg="Executing migration" id=create_ngalert_configuration_table 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.70948444Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=1.504386ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.714591002Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.71630189Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.711438ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.722909678Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.732932662Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=10.027044ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.737208356Z level=info msg="Executing migration" id="create provenance_type table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.737815623Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=603.967µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.741270328Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.742336229Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.066021ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.75305973Z level=info msg="Executing migration" id="create alert_image table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.754340243Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.281463ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.760212024Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.762025662Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.813828ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.765798292Z level=info msg="Executing 
migration" id="support longer URLs in alert_image table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.765955913Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=151.991µs 11:50:08 kafka | [2024-02-21 11:48:11,909] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,909] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,909] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,909] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,909] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,909] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,909] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,909] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,909] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,909] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,909] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,909] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,909] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,910] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,910] TRACE [Broker id=1] Handling LeaderAndIsr 
request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,910] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,910] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,910] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,910] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,910] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,910] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,910] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,910] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,910] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,910] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,910] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,910] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,910] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,910] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,910] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for 
partition __consumer_offsets-5 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,910] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,910] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,910] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,910] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,910] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,910] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 11:50:08 policy-pap | ssl.endpoint.identification.algorithm = https 11:50:08 policy-pap | ssl.engine.factory.class = null 11:50:08 policy-pap | ssl.key.password = null 11:50:08 policy-pap | ssl.keymanager.algorithm = SunX509 11:50:08 policy-pap | ssl.keystore.certificate.chain = null 11:50:08 policy-pap | ssl.keystore.key = null 11:50:08 policy-pap | ssl.keystore.location = null 11:50:08 policy-pap | ssl.keystore.password = null 11:50:08 policy-pap | ssl.keystore.type = JKS 11:50:08 policy-pap | ssl.protocol = TLSv1.3 11:50:08 policy-pap | ssl.provider = null 11:50:08 policy-pap | ssl.secure.random.implementation = null 11:50:08 policy-pap | ssl.trustmanager.algorithm = PKIX 11:50:08 policy-pap | ssl.truststore.certificates = null 11:50:08 policy-pap | ssl.truststore.location = null 11:50:08 policy-pap | ssl.truststore.password = null 11:50:08 policy-pap | ssl.truststore.type = JKS 11:50:08 policy-pap | transaction.timeout.ms = 60000 11:50:08 policy-pap | transactional.id = null 11:50:08 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:50:08 policy-pap | 11:50:08 policy-pap | [2024-02-21T11:48:11.165+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
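[editor's note] The ProducerConfig dump above (acks = -1, enable.idempotence = true, bootstrap.servers = [kafka:9092], String key/value serializers) is the configuration with which policy-pap's KAFKA sink reports "Instantiated an idempotent producer". The following is a minimal hedged sketch, not the actual policy-pap code, of an equivalent idempotent producer built with the plain Kafka Java client; only the topic name policy-pdp-pap and the listed values come from the log, the payload string is hypothetical.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class IdempotentProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Values mirrored from the ProducerConfig dump in the log above.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "all");                 // acks = -1 in the dump
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Hypothetical payload; policy-pap publishes PDP_UPDATE / PDP_STATE_CHANGE JSON on this topic.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_UPDATE\"}"));
            producer.flush();
        }
    }
}

With enable.idempotence = true the client enforces acks=all and bounded in-flight requests, which is why the broker later assigns the producer a ProducerId ("ProducerId set to 0/1 with epoch 0" further down in this log).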
11:50:08 policy-pap | [2024-02-21T11:48:11.181+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 11:50:08 policy-pap | [2024-02-21T11:48:11.181+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 11:50:08 policy-pap | [2024-02-21T11:48:11.181+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708516091181 11:50:08 policy-pap | [2024-02-21T11:48:11.181+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=12465c55-4625-4ba3-b00d-50849cf599b4, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 11:50:08 policy-pap | [2024-02-21T11:48:11.181+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=c7ba1d9e-3dfb-416c-868f-3a98f1d4e868, alive=false, publisher=null]]: starting 11:50:08 policy-pap | [2024-02-21T11:48:11.182+00:00|INFO|ProducerConfig|main] ProducerConfig values: 11:50:08 policy-pap | acks = -1 11:50:08 policy-pap | auto.include.jmx.reporter = true 11:50:08 policy-pap | batch.size = 16384 11:50:08 policy-pap | bootstrap.servers = [kafka:9092] 11:50:08 policy-pap | buffer.memory = 33554432 11:50:08 policy-pap | client.dns.lookup = use_all_dns_ips 11:50:08 policy-pap | client.id = producer-2 11:50:08 policy-pap | compression.type = none 11:50:08 policy-pap | connections.max.idle.ms = 540000 11:50:08 policy-pap | delivery.timeout.ms = 120000 11:50:08 policy-pap | enable.idempotence = true 11:50:08 policy-pap | interceptor.classes = [] 11:50:08 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:50:08 policy-pap | linger.ms = 0 11:50:08 policy-pap | max.block.ms = 60000 11:50:08 policy-pap | max.in.flight.requests.per.connection = 5 11:50:08 policy-pap | max.request.size = 1048576 11:50:08 policy-pap | metadata.max.age.ms = 300000 11:50:08 policy-pap | metadata.max.idle.ms = 300000 11:50:08 policy-pap | metric.reporters = [] 11:50:08 policy-pap | metrics.num.samples = 2 11:50:08 policy-pap | metrics.recording.level = INFO 11:50:08 policy-pap | metrics.sample.window.ms = 30000 11:50:08 policy-pap | partitioner.adaptive.partitioning.enable = true 11:50:08 policy-pap | partitioner.availability.timeout.ms = 0 11:50:08 policy-pap | partitioner.class = null 11:50:08 policy-pap | partitioner.ignore.keys = false 11:50:08 policy-pap | receive.buffer.bytes = 32768 11:50:08 policy-pap | reconnect.backoff.max.ms = 1000 11:50:08 policy-pap | reconnect.backoff.ms = 50 11:50:08 policy-pap | request.timeout.ms = 30000 11:50:08 policy-pap | retries = 2147483647 11:50:08 policy-pap | retry.backoff.ms = 100 11:50:08 policy-pap | sasl.client.callback.handler.class = null 11:50:08 policy-pap | sasl.jaas.config = null 11:50:08 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 11:50:08 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 11:50:08 policy-pap | sasl.kerberos.service.name = null 11:50:08 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 11:50:08 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 11:50:08 policy-pap | sasl.login.callback.handler.class = null 11:50:08 policy-pap | sasl.login.class = null 11:50:08 policy-pap | sasl.login.connect.timeout.ms = null 11:50:08 policy-pap | sasl.login.read.timeout.ms = null 11:50:08 policy-pap | sasl.login.refresh.buffer.seconds = 300 11:50:08 policy-pap | sasl.login.refresh.min.period.seconds = 60 11:50:08 policy-pap | sasl.login.refresh.window.factor = 0.8 11:50:08 policy-pap | 
sasl.login.refresh.window.jitter = 0.05 11:50:08 policy-pap | sasl.login.retry.backoff.max.ms = 10000 11:50:08 policy-pap | sasl.login.retry.backoff.ms = 100 11:50:08 policy-pap | sasl.mechanism = GSSAPI 11:50:08 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 11:50:08 policy-pap | sasl.oauthbearer.expected.audience = null 11:50:08 policy-pap | sasl.oauthbearer.expected.issuer = null 11:50:08 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 11:50:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 11:50:08 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 11:50:08 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 11:50:08 policy-pap | sasl.oauthbearer.scope.claim.name = scope 11:50:08 policy-pap | sasl.oauthbearer.sub.claim.name = sub 11:50:08 policy-pap | sasl.oauthbearer.token.endpoint.url = null 11:50:08 policy-pap | security.protocol = PLAINTEXT 11:50:08 policy-pap | security.providers = null 11:50:08 policy-pap | send.buffer.bytes = 131072 11:50:08 policy-pap | socket.connection.setup.timeout.max.ms = 30000 11:50:08 policy-pap | socket.connection.setup.timeout.ms = 10000 11:50:08 policy-pap | ssl.cipher.suites = null 11:50:08 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 11:50:08 policy-pap | ssl.endpoint.identification.algorithm = https 11:50:08 policy-pap | ssl.engine.factory.class = null 11:50:08 policy-pap | ssl.key.password = null 11:50:08 policy-pap | ssl.keymanager.algorithm = SunX509 11:50:08 policy-pap | ssl.keystore.certificate.chain = null 11:50:08 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0100-pdp.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 
11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL 11:50:08 kafka | [2024-02-21 11:48:11,910] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,910] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,910] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,910] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,911] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 11:50:08 kafka | [2024-02-21 11:48:11,911] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:11,976] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 
(kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:11,987] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:11,989] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:11,990] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:11,991] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,001] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,001] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,001] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,001] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,001] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,010] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,011] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,011] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,011] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,011] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,021] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,023] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,023] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,023] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,024] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,033] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,034] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,034] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,034] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 policy-pap | ssl.keystore.key = null 11:50:08 policy-pap | ssl.keystore.location = null 11:50:08 policy-pap | ssl.keystore.password = null 11:50:08 policy-pap | ssl.keystore.type = JKS 11:50:08 policy-pap | ssl.protocol = TLSv1.3 11:50:08 policy-pap | ssl.provider = null 11:50:08 policy-pap | ssl.secure.random.implementation = null 11:50:08 policy-pap | ssl.trustmanager.algorithm = PKIX 11:50:08 policy-pap | ssl.truststore.certificates = null 11:50:08 policy-pap | ssl.truststore.location = null 11:50:08 policy-pap | ssl.truststore.password = null 11:50:08 policy-pap | ssl.truststore.type = JKS 11:50:08 policy-pap | transaction.timeout.ms = 60000 11:50:08 policy-pap | transactional.id = null 11:50:08 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 11:50:08 policy-pap | 11:50:08 policy-pap | [2024-02-21T11:48:11.183+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
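[editor's note] The kafka broker entries above show each __consumer_offsets partition log being created with {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} as broker 1 becomes leader. A hedged sketch (not taken from the ONAP compose setup, and using the hypothetical topic name example-offsets-like) of how a topic with the same per-topic properties could be created through the Kafka AdminClient:

import java.util.Map;
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CompactTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Same per-topic properties the broker reports for the __consumer_offsets partitions;
            // compression.type=producer is the broker default and is omitted here.
            NewTopic topic = new NewTopic("example-offsets-like", 50, (short) 1)
                    .configs(Map.of(
                            "cleanup.policy", "compact",
                            "segment.bytes", "104857600"));
            admin.createTopics(Set.of(topic)).all().get();
        }
    }
}

Log compaction on the offsets topic is what lets the group coordinator keep only the latest committed offset per (group, topic, partition) key.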
11:50:08 policy-pap | [2024-02-21T11:48:11.185+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 11:50:08 policy-pap | [2024-02-21T11:48:11.185+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 11:50:08 policy-pap | [2024-02-21T11:48:11.186+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1708516091185 11:50:08 policy-pap | [2024-02-21T11:48:11.186+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=c7ba1d9e-3dfb-416c-868f-3a98f1d4e868, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 11:50:08 policy-pap | [2024-02-21T11:48:11.186+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 11:50:08 policy-pap | [2024-02-21T11:48:11.186+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 11:50:08 policy-pap | [2024-02-21T11:48:11.188+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 11:50:08 policy-pap | [2024-02-21T11:48:11.188+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 11:50:08 policy-pap | [2024-02-21T11:48:11.190+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 11:50:08 policy-pap | [2024-02-21T11:48:11.190+00:00|INFO|TimerManager|Thread-9] timer manager update started 11:50:08 policy-pap | [2024-02-21T11:48:11.190+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 11:50:08 policy-pap | [2024-02-21T11:48:11.190+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 11:50:08 policy-pap | [2024-02-21T11:48:11.190+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 11:50:08 policy-pap | [2024-02-21T11:48:11.191+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 11:50:08 policy-pap | [2024-02-21T11:48:11.192+00:00|INFO|ServiceManager|main] Policy PAP started 11:50:08 policy-pap | [2024-02-21T11:48:11.193+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.877 seconds (process running for 11.558) 11:50:08 policy-pap | [2024-02-21T11:48:11.621+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: NROpzKGmRGeJsBLulqXClg 11:50:08 policy-pap | [2024-02-21T11:48:11.621+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 11:50:08 policy-pap | [2024-02-21T11:48:11.622+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: NROpzKGmRGeJsBLulqXClg 11:50:08 policy-pap | [2024-02-21T11:48:11.622+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: NROpzKGmRGeJsBLulqXClg 11:50:08 policy-pap | [2024-02-21T11:48:11.664+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-3, groupId=deda7b7f-e78f-4ba0-9889-983a637c2ccd] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 11:50:08 policy-pap | [2024-02-21T11:48:11.664+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-3, groupId=deda7b7f-e78f-4ba0-9889-983a637c2ccd] Cluster ID: NROpzKGmRGeJsBLulqXClg 11:50:08 policy-pap | [2024-02-21T11:48:11.743+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer 
clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 11:50:08 policy-pap | [2024-02-21T11:48:11.754+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 11:50:08 policy-pap | [2024-02-21T11:48:11.757+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 11:50:08 policy-pap | [2024-02-21T11:48:11.810+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-3, groupId=deda7b7f-e78f-4ba0-9889-983a637c2ccd] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 11:50:08 policy-pap | [2024-02-21T11:48:11.863+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 11:50:08 kafka | [2024-02-21 11:48:12,034] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,045] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,046] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,046] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,046] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,046] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,056] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,060] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,060] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,060] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,061] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,070] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,071] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,071] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,071] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,071] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,084] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,088] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,088] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,089] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,089] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,097] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,097] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,097] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,097] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,097] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,103] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,103] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,103] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,103] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,103] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,112] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,113] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,113] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,113] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 policy-pap | [2024-02-21T11:48:11.920+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-3, groupId=deda7b7f-e78f-4ba0-9889-983a637c2ccd] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 11:50:08 policy-pap | [2024-02-21T11:48:11.971+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 11:50:08 policy-pap | [2024-02-21T11:48:12.025+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-3, groupId=deda7b7f-e78f-4ba0-9889-983a637c2ccd] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 11:50:08 policy-pap | [2024-02-21T11:48:12.085+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 11:50:08 policy-pap | [2024-02-21T11:48:12.130+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-3, groupId=deda7b7f-e78f-4ba0-9889-983a637c2ccd] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 11:50:08 policy-pap | [2024-02-21T11:48:12.191+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 11:50:08 policy-pap | [2024-02-21T11:48:12.241+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-3, groupId=deda7b7f-e78f-4ba0-9889-983a637c2ccd] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 11:50:08 policy-pap | [2024-02-21T11:48:12.298+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 11:50:08 policy-pap | [2024-02-21T11:48:12.345+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-3, groupId=deda7b7f-e78f-4ba0-9889-983a637c2ccd] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 11:50:08 policy-pap | [2024-02-21T11:48:12.404+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] 
[Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 11:50:08 policy-pap | [2024-02-21T11:48:12.450+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-3, groupId=deda7b7f-e78f-4ba0-9889-983a637c2ccd] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 11:50:08 policy-pap | [2024-02-21T11:48:12.508+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 11:50:08 policy-pap | [2024-02-21T11:48:12.571+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-3, groupId=deda7b7f-e78f-4ba0-9889-983a637c2ccd] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 11:50:08 policy-pap | [2024-02-21T11:48:12.578+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-3, groupId=deda7b7f-e78f-4ba0-9889-983a637c2ccd] (Re-)joining group 11:50:08 policy-pap | [2024-02-21T11:48:12.606+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-3, groupId=deda7b7f-e78f-4ba0-9889-983a637c2ccd] Request joining group due to: need to re-join with the given member-id: consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-3-f84bcac6-ce3f-43f9-9186-5dad97824e9a 11:50:08 policy-pap | [2024-02-21T11:48:12.606+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-3, groupId=deda7b7f-e78f-4ba0-9889-983a637c2ccd] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 11:50:08 policy-pap | [2024-02-21T11:48:12.606+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-3, groupId=deda7b7f-e78f-4ba0-9889-983a637c2ccd] (Re-)joining group 11:50:08 policy-pap | [2024-02-21T11:48:12.615+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 11:50:08 policy-pap | [2024-02-21T11:48:12.617+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 11:50:08 policy-pap | [2024-02-21T11:48:12.621+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-4d170d86-8c1d-467f-973a-032b61efbcdf 11:50:08 policy-pap | [2024-02-21T11:48:12.621+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 11:50:08 policy-pap | [2024-02-21T11:48:12.621+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 11:50:08 policy-pap | [2024-02-21T11:48:15.629+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-3, groupId=deda7b7f-e78f-4ba0-9889-983a637c2ccd] Successfully joined group with generation Generation{generationId=1, memberId='consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-3-f84bcac6-ce3f-43f9-9186-5dad97824e9a', protocol='range'} 11:50:08 policy-pap | [2024-02-21T11:48:15.639+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-4d170d86-8c1d-467f-973a-032b61efbcdf', protocol='range'} 11:50:08 policy-pap | [2024-02-21T11:48:15.655+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-3, groupId=deda7b7f-e78f-4ba0-9889-983a637c2ccd] Finished assignment for group at generation 1: {consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-3-f84bcac6-ce3f-43f9-9186-5dad97824e9a=Assignment(partitions=[policy-pdp-pap-0])} 11:50:08 policy-pap | [2024-02-21T11:48:15.656+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-4d170d86-8c1d-467f-973a-032b61efbcdf=Assignment(partitions=[policy-pdp-pap-0])} 11:50:08 policy-pap | [2024-02-21T11:48:15.701+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-4d170d86-8c1d-467f-973a-032b61efbcdf', protocol='range'} 11:50:08 policy-pap | [2024-02-21T11:48:15.702+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a 11:50:08 policy-db-migrator | JOIN pdpstatistics b 11:50:08 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp 11:50:08 policy-db-migrator | SET a.id = b.id 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 
0190-jpapolicyaudit.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0210-sequence.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-pap | [2024-02-21T11:48:15.703+00:00|INFO|ConsumerCoordinator|kafka-coordinator-heartbeat-thread | deda7b7f-e78f-4ba0-9889-983a637c2ccd] [Consumer clientId=consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-3, groupId=deda7b7f-e78f-4ba0-9889-983a637c2ccd] Successfully synced group in generation Generation{generationId=1, memberId='consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-3-f84bcac6-ce3f-43f9-9186-5dad97824e9a', protocol='range'} 11:50:08 policy-pap | [2024-02-21T11:48:15.705+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-3, groupId=deda7b7f-e78f-4ba0-9889-983a637c2ccd] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 11:50:08 policy-pap | [2024-02-21T11:48:15.706+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 11:50:08 policy-pap | [2024-02-21T11:48:15.706+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-3, groupId=deda7b7f-e78f-4ba0-9889-983a637c2ccd] Adding newly assigned partitions: policy-pdp-pap-0 11:50:08 policy-pap | [2024-02-21T11:48:15.725+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 11:50:08 policy-pap | [2024-02-21T11:48:15.725+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-3, groupId=deda7b7f-e78f-4ba0-9889-983a637c2ccd] Found no committed offset for partition policy-pdp-pap-0 11:50:08 policy-pap | [2024-02-21T11:48:15.747+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 11:50:08 policy-pap | [2024-02-21T11:48:15.749+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-3, groupId=deda7b7f-e78f-4ba0-9889-983a637c2ccd] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
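[editor's note] The consumer entries above walk through the standard Kafka group-join sequence: the first JoinGroup attempt is rejected with MemberIdRequiredException, the client rejoins with its assigned member id, the coordinator assigns policy-pdp-pap-0, no committed offset is found, and the position is reset. A small hedged sketch of a consumer that would go through the same sequence against this topic; the group id example-group and the auto.offset.reset choice are hypothetical, not the values baked into the CSIT images.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class GroupJoinSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "example-group");          // hypothetical group id
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");        // used when no offset is committed yet
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // subscribe() triggers the JoinGroup / SyncGroup exchange seen in the log;
            // a MemberIdRequiredException on the first attempt is normal for a brand-new member.
            consumer.subscribe(List.of("policy-pdp-pap"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());   // PDP_STATUS / PDP_UPDATE JSON payloads
            }
        }
    }
}

This mirrors why the log shows two rounds of "(Re-)joining group" per consumer before "Successfully joined group with generation Generation{generationId=1, ...}".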
11:50:08 policy-pap | [2024-02-21T11:48:17.064+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-4] Initializing Spring DispatcherServlet 'dispatcherServlet' 11:50:08 policy-pap | [2024-02-21T11:48:17.064+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Initializing Servlet 'dispatcherServlet' 11:50:08 policy-pap | [2024-02-21T11:48:17.067+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Completed initialization in 2 ms 11:50:08 policy-pap | [2024-02-21T11:48:32.838+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-pdp-pap] ***** OrderedServiceImpl implementers: 11:50:08 policy-pap | [] 11:50:08 policy-pap | [2024-02-21T11:48:32.839+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:50:08 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"01d87f90-6308-4c3c-a041-13f1212bcd60","timestampMs":1708516112802,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup"} 11:50:08 policy-pap | [2024-02-21T11:48:32.839+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:50:08 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"01d87f90-6308-4c3c-a041-13f1212bcd60","timestampMs":1708516112802,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup"} 11:50:08 policy-pap | [2024-02-21T11:48:32.848+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 11:50:08 policy-pap | [2024-02-21T11:48:32.939+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpUpdate starting 11:50:08 policy-pap | [2024-02-21T11:48:32.939+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpUpdate starting listener 11:50:08 policy-pap | [2024-02-21T11:48:32.940+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpUpdate starting timer 11:50:08 policy-pap | [2024-02-21T11:48:32.940+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=5d9e63f8-1b57-4cf9-bb44-4b10e7689864, expireMs=1708516142940] 11:50:08 policy-pap | [2024-02-21T11:48:32.942+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpUpdate starting enqueue 11:50:08 policy-pap | [2024-02-21T11:48:32.942+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=5d9e63f8-1b57-4cf9-bb44-4b10e7689864, expireMs=1708516142940] 11:50:08 policy-pap | [2024-02-21T11:48:32.942+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpUpdate started 11:50:08 policy-pap | [2024-02-21T11:48:32.944+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 11:50:08 policy-pap | {"source":"pap-901e61cf-d04a-4979-8ccb-af4a8d6816b5","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"5d9e63f8-1b57-4cf9-bb44-4b10e7689864","timestampMs":1708516112923,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:50:08 policy-pap | [2024-02-21T11:48:32.983+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:50:08 policy-pap | 
{"source":"pap-901e61cf-d04a-4979-8ccb-af4a8d6816b5","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"5d9e63f8-1b57-4cf9-bb44-4b10e7689864","timestampMs":1708516112923,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:50:08 policy-pap | [2024-02-21T11:48:32.983+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:50:08 policy-pap | {"source":"pap-901e61cf-d04a-4979-8ccb-af4a8d6816b5","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"5d9e63f8-1b57-4cf9-bb44-4b10e7689864","timestampMs":1708516112923,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:50:08 policy-pap | [2024-02-21T11:48:32.984+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 11:50:08 policy-pap | [2024-02-21T11:48:32.984+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 11:50:08 policy-pap | [2024-02-21T11:48:33.003+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:50:08 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"27d31384-f318-4271-8ed2-1876fd20c6e1","timestampMs":1708516112987,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup"} 11:50:08 policy-pap | [2024-02-21T11:48:33.005+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:50:08 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"27d31384-f318-4271-8ed2-1876fd20c6e1","timestampMs":1708516112987,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup"} 11:50:08 policy-pap | [2024-02-21T11:48:33.006+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 11:50:08 policy-pap | [2024-02-21T11:48:33.010+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:50:08 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"5d9e63f8-1b57-4cf9-bb44-4b10e7689864","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"018e3451-cab0-4ff2-972b-669a0efc11c4","timestampMs":1708516112988,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:50:08 policy-pap | [2024-02-21T11:48:33.031+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpUpdate stopping 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0220-sequence.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 11:50:08 
policy-db-migrator | -------------- 11:50:08 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0120-toscatrigger.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0140-toscaparameter.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0150-toscaproperty.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-pap | [2024-02-21T11:48:33.031+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpUpdate stopping enqueue 11:50:08 policy-pap | [2024-02-21T11:48:33.031+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpUpdate stopping timer 11:50:08 policy-pap | [2024-02-21T11:48:33.031+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=5d9e63f8-1b57-4cf9-bb44-4b10e7689864, expireMs=1708516142940] 11:50:08 policy-pap | [2024-02-21T11:48:33.031+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpUpdate stopping listener 11:50:08 policy-pap | [2024-02-21T11:48:33.031+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpUpdate stopped 11:50:08 policy-pap | [2024-02-21T11:48:33.036+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpUpdate successful 11:50:08 policy-pap | [2024-02-21T11:48:33.037+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] 
apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f start publishing next request 11:50:08 policy-pap | [2024-02-21T11:48:33.037+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpStateChange starting 11:50:08 policy-pap | [2024-02-21T11:48:33.037+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpStateChange starting listener 11:50:08 policy-pap | [2024-02-21T11:48:33.037+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpStateChange starting timer 11:50:08 policy-pap | [2024-02-21T11:48:33.037+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=3eca3e7e-cdd4-486e-a88b-6144b7e84349, expireMs=1708516143037] 11:50:08 policy-pap | [2024-02-21T11:48:33.037+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpStateChange starting enqueue 11:50:08 policy-pap | [2024-02-21T11:48:33.038+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpStateChange started 11:50:08 policy-pap | [2024-02-21T11:48:33.038+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 29999ms Timer [name=3eca3e7e-cdd4-486e-a88b-6144b7e84349, expireMs=1708516143037] 11:50:08 policy-pap | [2024-02-21T11:48:33.038+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 11:50:08 policy-pap | {"source":"pap-901e61cf-d04a-4979-8ccb-af4a8d6816b5","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"3eca3e7e-cdd4-486e-a88b-6144b7e84349","timestampMs":1708516112924,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:50:08 policy-pap | [2024-02-21T11:48:33.039+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:50:08 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"5d9e63f8-1b57-4cf9-bb44-4b10e7689864","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"018e3451-cab0-4ff2-972b-669a0efc11c4","timestampMs":1708516112988,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:50:08 policy-pap | [2024-02-21T11:48:33.040+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 5d9e63f8-1b57-4cf9-bb44-4b10e7689864 11:50:08 policy-pap | [2024-02-21T11:48:33.048+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:50:08 policy-pap | {"source":"pap-901e61cf-d04a-4979-8ccb-af4a8d6816b5","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"3eca3e7e-cdd4-486e-a88b-6144b7e84349","timestampMs":1708516112924,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:50:08 policy-pap | [2024-02-21T11:48:33.048+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 11:50:08 policy-pap | [2024-02-21T11:48:33.062+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:50:08 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"3eca3e7e-cdd4-486e-a88b-6144b7e84349","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"8a0d37d6-a004-4f90-8449-2356eb23a00d","timestampMs":1708516113051,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:50:08 policy-pap | [2024-02-21T11:48:33.062+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 3eca3e7e-cdd4-486e-a88b-6144b7e84349 11:50:08 policy-pap | [2024-02-21T11:48:33.078+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:50:08 policy-pap | {"source":"pap-901e61cf-d04a-4979-8ccb-af4a8d6816b5","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"3eca3e7e-cdd4-486e-a88b-6144b7e84349","timestampMs":1708516112924,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:50:08 policy-pap | [2024-02-21T11:48:33.078+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 11:50:08 policy-pap | [2024-02-21T11:48:33.080+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:50:08 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"3eca3e7e-cdd4-486e-a88b-6144b7e84349","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"8a0d37d6-a004-4f90-8449-2356eb23a00d","timestampMs":1708516113051,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:50:08 policy-pap | [2024-02-21T11:48:33.081+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpStateChange stopping 11:50:08 policy-pap | [2024-02-21T11:48:33.081+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpStateChange stopping enqueue 11:50:08 policy-pap | [2024-02-21T11:48:33.081+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpStateChange stopping timer 11:50:08 policy-pap | [2024-02-21T11:48:33.081+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=3eca3e7e-cdd4-486e-a88b-6144b7e84349, expireMs=1708516143037] 11:50:08 policy-pap | [2024-02-21T11:48:33.081+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpStateChange stopping listener 11:50:08 policy-pap | [2024-02-21T11:48:33.081+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpStateChange stopped 11:50:08 policy-pap | [2024-02-21T11:48:33.081+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpStateChange successful 11:50:08 policy-pap | [2024-02-21T11:48:33.081+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f start publishing next request 11:50:08 policy-pap | [2024-02-21T11:48:33.081+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpUpdate starting 11:50:08 policy-pap | [2024-02-21T11:48:33.081+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpUpdate starting listener 11:50:08 policy-pap | [2024-02-21T11:48:33.081+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpUpdate starting timer 11:50:08 policy-pap | 
[2024-02-21T11:48:33.081+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=55fb9ca5-eea7-4501-b4ab-db748e5d2263, expireMs=1708516143081] 11:50:08 policy-pap | [2024-02-21T11:48:33.081+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpUpdate starting enqueue 11:50:08 policy-pap | [2024-02-21T11:48:33.081+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpUpdate started 11:50:08 policy-pap | [2024-02-21T11:48:33.082+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 11:50:08 policy-pap | {"source":"pap-901e61cf-d04a-4979-8ccb-af4a8d6816b5","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"55fb9ca5-eea7-4501-b4ab-db748e5d2263","timestampMs":1708516113070,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:50:08 policy-pap | [2024-02-21T11:48:33.092+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:50:08 policy-pap | {"source":"pap-901e61cf-d04a-4979-8ccb-af4a8d6816b5","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"55fb9ca5-eea7-4501-b4ab-db748e5d2263","timestampMs":1708516113070,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:50:08 policy-pap | [2024-02-21T11:48:33.092+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 11:50:08 policy-pap | [2024-02-21T11:48:33.092+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:50:08 policy-pap | {"source":"pap-901e61cf-d04a-4979-8ccb-af4a8d6816b5","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"55fb9ca5-eea7-4501-b4ab-db748e5d2263","timestampMs":1708516113070,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:50:08 policy-pap | [2024-02-21T11:48:33.092+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 11:50:08 policy-pap | [2024-02-21T11:48:33.099+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 11:50:08 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"55fb9ca5-eea7-4501-b4ab-db748e5d2263","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"ae2231d7-bdaa-47c2-b1f4-6d8d36d4863d","timestampMs":1708516113090,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:50:08 policy-pap | [2024-02-21T11:48:33.100+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 55fb9ca5-eea7-4501-b4ab-db748e5d2263 11:50:08 policy-pap | [2024-02-21T11:48:33.100+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 11:50:08 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"55fb9ca5-eea7-4501-b4ab-db748e5d2263","responseStatus":"SUCCESS","responseMessage":"Pdp already 
updated"},"messageName":"PDP_STATUS","requestId":"ae2231d7-bdaa-47c2-b1f4-6d8d36d4863d","timestampMs":1708516113090,"name":"apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 11:50:08 policy-pap | [2024-02-21T11:48:33.101+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpUpdate stopping 11:50:08 policy-pap | [2024-02-21T11:48:33.101+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpUpdate stopping enqueue 11:50:08 policy-pap | [2024-02-21T11:48:33.102+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpUpdate stopping timer 11:50:08 policy-pap | [2024-02-21T11:48:33.102+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=55fb9ca5-eea7-4501-b4ab-db748e5d2263, expireMs=1708516143081] 11:50:08 policy-pap | [2024-02-21T11:48:33.102+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpUpdate stopping listener 11:50:08 policy-pap | [2024-02-21T11:48:33.102+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpUpdate stopped 11:50:08 policy-pap | [2024-02-21T11:48:33.112+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f PdpUpdate successful 11:50:08 policy-pap | [2024-02-21T11:48:33.112+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-9c6c3a1f-b52f-49c2-8783-12eb5b1df64f has no more requests 11:50:08 policy-pap | [2024-02-21T11:48:37.701+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 11:50:08 policy-pap | [2024-02-21T11:48:37.710+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 11:50:08 policy-pap | [2024-02-21T11:48:38.137+00:00|INFO|SessionData|http-nio-6969-exec-3] unknown group testGroup 11:50:08 policy-pap | [2024-02-21T11:48:38.714+00:00|INFO|SessionData|http-nio-6969-exec-3] create cached group testGroup 11:50:08 policy-pap | [2024-02-21T11:48:38.714+00:00|INFO|SessionData|http-nio-6969-exec-3] creating DB group testGroup 11:50:08 policy-pap | [2024-02-21T11:48:39.229+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group testGroup 11:50:08 policy-pap | [2024-02-21T11:48:39.475+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy onap.restart.tca 1.0.0 11:50:08 policy-pap | [2024-02-21T11:48:39.571+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 11:50:08 policy-pap | [2024-02-21T11:48:39.571+00:00|INFO|SessionData|http-nio-6969-exec-10] update cached group testGroup 11:50:08 policy-pap | [2024-02-21T11:48:39.572+00:00|INFO|SessionData|http-nio-6969-exec-10] updating DB group testGroup 11:50:08 policy-pap | [2024-02-21T11:48:39.592+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-02-21T11:48:39Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-02-21T11:48:39Z, user=policyadmin)] 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | ALTER 
TABLE jpapolicyaudit DROP PRIMARY KEY 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0100-upgrade.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | select 'upgrade to 1100 completed' as msg 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | msg 11:50:08 policy-db-migrator | upgrade to 1100 completed 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0120-audit_sequence.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) 
VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | TRUNCATE TABLE sequence 11:50:08 policy-pap | [2024-02-21T11:48:40.329+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup 11:50:08 policy-pap | [2024-02-21T11:48:40.332+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 11:50:08 policy-pap | [2024-02-21T11:48:40.332+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy onap.restart.tca 1.0.0 11:50:08 policy-pap | [2024-02-21T11:48:40.332+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup 11:50:08 policy-pap | [2024-02-21T11:48:40.332+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup 11:50:08 policy-pap | [2024-02-21T11:48:40.344+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-02-21T11:48:40Z, user=policyadmin)] 11:50:08 policy-pap | [2024-02-21T11:48:40.676+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup 11:50:08 policy-pap | [2024-02-21T11:48:40.677+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup 11:50:08 policy-pap | [2024-02-21T11:48:40.677+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 11:50:08 policy-pap | [2024-02-21T11:48:40.677+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 11:50:08 policy-pap | [2024-02-21T11:48:40.677+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup 11:50:08 policy-pap | [2024-02-21T11:48:40.677+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup 11:50:08 policy-pap | [2024-02-21T11:48:40.689+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-02-21T11:48:40Z, user=policyadmin)] 11:50:08 policy-pap | [2024-02-21T11:49:01.250+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup 11:50:08 policy-pap | [2024-02-21T11:49:01.252+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup 11:50:08 policy-pap | [2024-02-21T11:49:02.940+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=5d9e63f8-1b57-4cf9-bb44-4b10e7689864, expireMs=1708516142940] 11:50:08 policy-pap | [2024-02-21T11:49:03.037+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=3eca3e7e-cdd4-486e-a88b-6144b7e84349, expireMs=1708516143037] 11:50:08 kafka | [2024-02-21 11:48:12,113] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,120] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,121] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,121] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,121] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,121] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,128] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,129] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,129] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,129] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,129] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,136] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,137] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,137] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,137] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,137] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,150] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,150] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,151] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,151] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,151] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,160] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,161] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,161] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,161] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,161] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,169] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,170] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,170] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,171] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,171] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,179] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,179] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,179] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,179] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,179] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,195] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,196] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,196] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,196] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,197] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,204] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,205] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,205] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,205] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,205] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,214] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,215] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,215] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | DROP TABLE pdpstatistics 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | DROP TABLE statistics_sequence 11:50:08 policy-db-migrator | -------------- 11:50:08 policy-db-migrator | 11:50:08 policy-db-migrator | policyadmin: OK: upgrade (1300) 11:50:08 policy-db-migrator | name version 11:50:08 policy-db-migrator | policyadmin 1300 11:50:08 policy-db-migrator | ID script operation from_version to_version tag success atTime 11:50:08 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:39 11:50:08 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:39 11:50:08 
policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:39 11:50:08 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:39 11:50:08 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:39 11:50:08 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:39 11:50:08 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:39 11:50:08 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:39 11:50:08 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:39 11:50:08 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:39 11:50:08 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:40 11:50:08 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:40 11:50:08 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:40 11:50:08 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:40 11:50:08 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:40 11:50:08 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:40 11:50:08 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:40 11:50:08 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:40 11:50:08 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:40 11:50:08 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:40 11:50:08 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:40 11:50:08 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:40 11:50:08 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:40 11:50:08 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:40 11:50:08 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:40 11:50:08 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:40 11:50:08 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:40 11:50:08 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:40 11:50:08 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:40 11:50:08 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:40 
11:50:08 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:40 11:50:08 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:40 11:50:08 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:40 11:50:08 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:41 11:50:08 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:41 11:50:08 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:41 11:50:08 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:41 11:50:08 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:41 11:50:08 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:41 11:50:08 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:41 11:50:08 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:41 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.770250107Z level=info msg="Executing migration" id=create_alert_configuration_history_table 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.771212148Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=961.661µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.77822526Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.78019592Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.97143ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.784576605Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.785014959Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.788610657Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.789081552Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=470.525µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.794180284Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.795851021Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.669947ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.802569291Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.809155719Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=6.579448ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.814387263Z level=info msg="Executing migration" id="create 
library_element table v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.815383433Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=996.38µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.821466197Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.823275925Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.809478ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.82765915Z level=info msg="Executing migration" id="create library_element_connection table v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.828471578Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=822.588µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.834599552Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.835737184Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.137522ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.840569203Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.841925198Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.355665ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.845594925Z level=info msg="Executing migration" id="increase max description length to 2048" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.845693576Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=99.511µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.850592127Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.850785149Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=192.912µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.861137326Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.86152019Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=382.824µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.865785664Z level=info msg="Executing migration" id="create data_keys table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.86728994Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.506286ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.871716265Z level=info msg="Executing migration" id="create secrets table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.872551384Z level=info msg="Migration successfully executed" id="create secrets table" duration=835.049µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.878787628Z level=info msg="Executing migration" id="rename data_keys name column to id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.925961965Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=47.175467ms 11:50:08 
grafana | logger=migrator t=2024-02-21T11:47:44.932893447Z level=info msg="Executing migration" id="add name column into data_keys" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.937953029Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.055692ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.944559587Z level=info msg="Executing migration" id="copy data_keys id column values into name" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.945064083Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=505.786µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:44.954175537Z level=info msg="Executing migration" id="rename data_keys name column to label" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.00094434Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=46.768053ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.064750308Z level=info msg="Executing migration" id="rename data_keys id column back to name" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.114866245Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=50.111567ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.11918176Z level=info msg="Executing migration" id="create kv_store table v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.119829197Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=646.357µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.124230503Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.125519926Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.289313ms 11:50:08 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:41 11:50:08 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:41 11:50:08 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:41 11:50:08 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:41 11:50:08 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:41 11:50:08 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:41 11:50:08 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:41 11:50:08 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:41 11:50:08 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:41 11:50:08 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:41 11:50:08 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:41 11:50:08 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:41 11:50:08 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:41 11:50:08 
policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:41 11:50:08 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:41 11:50:08 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:41 11:50:08 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:42 11:50:08 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:42 11:50:08 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:42 11:50:08 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:42 11:50:08 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:42 11:50:08 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:42 11:50:08 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:42 11:50:08 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:42 11:50:08 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:42 11:50:08 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:42 11:50:08 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:42 11:50:08 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:42 11:50:08 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:42 11:50:08 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:42 11:50:08 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:42 11:50:08 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:42 11:50:08 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:42 11:50:08 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:42 11:50:08 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:42 11:50:08 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:42 11:50:08 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:42 11:50:08 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:42 11:50:08 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:42 11:50:08 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:43 11:50:08 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:43 11:50:08 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql 
upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:43 11:50:08 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:43 11:50:08 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:43 11:50:08 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:43 11:50:08 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:43 11:50:08 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:43 11:50:08 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:43 11:50:08 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:43 11:50:08 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:43 11:50:08 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:43 11:50:08 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:43 11:50:08 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:43 11:50:08 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:44 11:50:08 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 2102241147390800u 1 2024-02-21 11:47:44 11:50:08 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 2102241147390900u 1 2024-02-21 11:47:44 11:50:08 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 2102241147390900u 1 2024-02-21 11:47:44 11:50:08 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 2102241147390900u 1 2024-02-21 11:47:44 11:50:08 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 2102241147390900u 1 2024-02-21 11:47:44 11:50:08 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 2102241147390900u 1 2024-02-21 11:47:44 11:50:08 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 2102241147390900u 1 2024-02-21 11:47:44 11:50:08 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2102241147390900u 1 2024-02-21 11:47:44 11:50:08 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2102241147390900u 1 2024-02-21 11:47:44 11:50:08 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 2102241147390900u 1 2024-02-21 11:47:44 11:50:08 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 2102241147390900u 1 2024-02-21 11:47:44 11:50:08 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 2102241147390900u 1 2024-02-21 11:47:44 11:50:08 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 2102241147390900u 1 2024-02-21 11:47:44 11:50:08 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 2102241147390900u 1 2024-02-21 11:47:44 11:50:08 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 2102241147391000u 1 2024-02-21 11:47:44 11:50:08 policy-db-migrator | 111 
0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 2102241147391000u 1 2024-02-21 11:47:45 11:50:08 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 2102241147391000u 1 2024-02-21 11:47:45 11:50:08 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 2102241147391000u 1 2024-02-21 11:47:45 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.130688729Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.130958602Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=270.573µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.134810081Z level=info msg="Executing migration" id="create permission table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.135708811Z level=info msg="Migration successfully executed" id="create permission table" duration=898.5µs 11:50:08 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 2102241147391000u 1 2024-02-21 11:47:45 11:50:08 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 2102241147391000u 1 2024-02-21 11:47:45 11:50:08 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 2102241147391000u 1 2024-02-21 11:47:45 11:50:08 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 2102241147391000u 1 2024-02-21 11:47:45 11:50:08 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 2102241147391000u 1 2024-02-21 11:47:45 11:50:08 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 2102241147391100u 1 2024-02-21 11:47:45 11:50:08 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 2102241147391200u 1 2024-02-21 11:47:45 11:50:08 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 2102241147391200u 1 2024-02-21 11:47:45 11:50:08 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 2102241147391200u 1 2024-02-21 11:47:45 11:50:08 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 2102241147391200u 1 2024-02-21 11:47:45 11:50:08 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 2102241147391300u 1 2024-02-21 11:47:45 11:50:08 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 2102241147391300u 1 2024-02-21 11:47:45 11:50:08 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 2102241147391300u 1 2024-02-21 11:47:45 11:50:08 policy-db-migrator | policyadmin: OK @ 1300 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.140272808Z level=info msg="Executing migration" id="add unique index permission.role_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.142695843Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=2.415946ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.149531154Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.150664185Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.132791ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.159983691Z level=info msg="Executing migration" id="create role table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.161198394Z level=info msg="Migration successfully executed" id="create role table" duration=1.214203ms 
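The policy-db-migrator records above follow one fixed layout per migration: record id, SQL script, operation, source schema version, target schema version, batch tag, a column that reads 1 for every record here (taken here as a success flag), and the completion timestamp; the stream then closes with "policyadmin: OK @ 1300" once the policyadmin schema reaches version 1300. For pulling those records back out of a wrapped console log like this one, a minimal Python sketch follows; the record layout is inferred from the lines above, and console.log is only a placeholder path, not a file produced by this job.

import re

# One policy-db-migrator record, as printed in the listing above:
#   policy-db-migrator | <id> <script> <operation> <from> <to> <tag> <flag> <date> <time>
RECORD = re.compile(
    r"policy-db-migrator \| (\d+) (\S+\.sql) (\w+) (\d+) (\d+) (\S+) (\d+) "
    r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})"
)

def failed_migrations(log_text: str):
    """Return (id, script) pairs whose flag column is not 1."""
    return [(int(m.group(1)), m.group(2))
            for m in RECORD.finditer(log_text)
            if m.group(7) != "1"]

if __name__ == "__main__":
    # console.log is a placeholder; point it at any captured build console.
    with open("console.log", encoding="utf-8") as fh:
        failures = failed_migrations(fh.read())
    print("all recorded migrations succeeded" if not failures else failures)

Run against the records above, this should report that all recorded migrations succeeded, matching the migrator's own "policyadmin: OK @ 1300" summary.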
11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.172766763Z level=info msg="Executing migration" id="add column display_name" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.184345163Z level=info msg="Migration successfully executed" id="add column display_name" duration=11.57861ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.1908556Z level=info msg="Executing migration" id="add column group_name" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.202278127Z level=info msg="Migration successfully executed" id="add column group_name" duration=11.419357ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.21118222Z level=info msg="Executing migration" id="add index role.org_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.211997868Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=815.058µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.215612416Z level=info msg="Executing migration" id="add unique index role_org_id_name" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.217291253Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.677057ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.222161994Z level=info msg="Executing migration" id="add index role_org_id_uid" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.223274345Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.111861ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.227557759Z level=info msg="Executing migration" id="create team role table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.228679121Z level=info msg="Migration successfully executed" id="create team role table" duration=1.124992ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.233204707Z level=info msg="Executing migration" id="add index team_role.org_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.235026616Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.821609ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.240068848Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.24125827Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.152482ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.246555315Z level=info msg="Executing migration" id="add index team_role.team_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.249018281Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=2.462016ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.255232854Z level=info msg="Executing migration" id="create user role table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.256309006Z level=info msg="Migration successfully executed" id="create user role table" duration=1.075732ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.263315918Z level=info msg="Executing migration" id="add index user_role.org_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.264807394Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.490816ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.27024153Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 11:50:08 
grafana | logger=migrator t=2024-02-21T11:47:45.271319381Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.074611ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.275378703Z level=info msg="Executing migration" id="add index user_role.user_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.276443094Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.063801ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.281272294Z level=info msg="Executing migration" id="create builtin role table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.282655478Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.382693ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.288371657Z level=info msg="Executing migration" id="add index builtin_role.role_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.289421708Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.052931ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.293012575Z level=info msg="Executing migration" id="add index builtin_role.name" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.294992256Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.979731ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.299856505Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.309105851Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=9.254366ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.314866071Z level=info msg="Executing migration" id="add index builtin_role.org_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.315935681Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.06932ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.327780223Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.329823275Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=2.050582ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.341149502Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.342240563Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.091301ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.348401037Z level=info msg="Executing migration" id="add unique index role.uid" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.351275346Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=2.877539ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.364002927Z level=info msg="Executing migration" id="create seed assignment table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.364852046Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=833.069µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.373326214Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 11:50:08 grafana | logger=migrator 
t=2024-02-21T11:47:45.374393515Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.076831ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.378097933Z level=info msg="Executing migration" id="add column hidden to role table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.387282777Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=9.177524ms 11:50:08 kafka | [2024-02-21 11:48:12,215] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,215] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,223] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,223] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,223] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,223] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,224] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,236] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,237] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,237] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,237] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,237] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,253] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,254] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,254] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,254] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,254] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,262] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,262] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,262] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.391872575Z level=info msg="Executing migration" id="permission kind migration" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.397742526Z level=info msg="Migration successfully executed" id="permission kind migration" duration=5.869771ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.401806218Z level=info msg="Executing migration" id="permission attribute migration" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.412502578Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=10.70044ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.415995465Z level=info msg="Executing migration" id="permission identifier migration" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.424195039Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.199324ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.430749697Z level=info msg="Executing migration" id="add permission identifier index" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.431646715Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=897.048µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.435519656Z level=info msg="Executing migration" id="create query_history table v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.437094582Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.574186ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.449034356Z level=info msg="Executing migration" id="add 
index query_history.org_id-created_by-datasource_uid" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.451188437Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=2.144451ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.457787266Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.458089509Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=302.603µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.463504295Z level=info msg="Executing migration" id="rbac disabled migrator" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.463722387Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=219.372µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.470010662Z level=info msg="Executing migration" id="teams permissions migration" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.471108403Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=1.097541ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.477900974Z level=info msg="Executing migration" id="dashboard permissions" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.478921304Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=1.02162ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.483167898Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.484282209Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=1.114351ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.490709815Z level=info msg="Executing migration" id="drop managed folder create actions" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.490973379Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=263.244µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.504695939Z level=info msg="Executing migration" id="alerting notification permissions" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.505384857Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=688.848µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.517693904Z level=info msg="Executing migration" id="create query_history_star table v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.51914276Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.448226ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.523346652Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.525150721Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.803649ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.529360114Z level=info msg="Executing migration" id="add column org_id in query_history_star" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.537850752Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.490168ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.543056726Z 
level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.543280758Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=230.262µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.548395821Z level=info msg="Executing migration" id="create correlation table v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.549504862Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.103171ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.553447294Z level=info msg="Executing migration" id="add index correlations.uid" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.554745686Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.298392ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.561317045Z level=info msg="Executing migration" id="add index correlations.source_uid" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.563250884Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.933989ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.567291906Z level=info msg="Executing migration" id="add correlation config column" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.577468871Z level=info msg="Migration successfully executed" id="add correlation config column" duration=10.176785ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.586296403Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.587446634Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.150011ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.592546647Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.594294175Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.747638ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.598287576Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.628803991Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=30.517345ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.635298048Z level=info msg="Executing migration" id="create correlation v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.636163757Z level=info msg="Migration successfully executed" id="create correlation v2" duration=865.379µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.641357171Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.643426532Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=2.069741ms 11:50:08 kafka | [2024-02-21 11:48:12,262] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,262] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader 
epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,272] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,273] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,273] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,273] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,273] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,281] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,282] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,282] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,282] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,282] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,291] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,291] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,292] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,292] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,292] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,300] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,301] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,301] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,301] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,301] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,309] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,310] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,310] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,310] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,310] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,318] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,319] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,319] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,319] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,319] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,327] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,328] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,328] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.651635207Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.652996661Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.361274ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.658114614Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.660001673Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.887079ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.666152087Z level=info msg="Executing migration" id="copy correlation v1 to v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.666556331Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=405.024µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.672537262Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.674089009Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.554717ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.681632277Z level=info msg="Executing migration" id="add provisioning column" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.693046085Z level=info msg="Migration successfully executed" id="add provisioning column" duration=11.411818ms 11:50:08 grafana | 
logger=migrator t=2024-02-21T11:47:45.696721882Z level=info msg="Executing migration" id="create entity_events table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.697324978Z level=info msg="Migration successfully executed" id="create entity_events table" duration=602.866µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.701201249Z level=info msg="Executing migration" id="create dashboard public config v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.70224343Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.04737ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.708361352Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.70913483Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.713234083Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.714231143Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.718065533Z level=info msg="Executing migration" id="Drop old dashboard public config table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.718868191Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=802.398µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.725454819Z level=info msg="Executing migration" id="recreate dashboard public config v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.726734023Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.277334ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.733672954Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.735424452Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.750838ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.742444284Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.744427955Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.982501ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.750481627Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.75169553Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.215103ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.756244796Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.757271798Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.027402ms 11:50:08 grafana | logger=migrator 
t=2024-02-21T11:47:45.805386985Z level=info msg="Executing migration" id="Drop public config table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.806878949Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.503165ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.816216266Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.818132165Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.916279ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.824246698Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.825570673Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.322225ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.832592655Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.834541185Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.94014ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.838217412Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.839429616Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.212024ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.843033953Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.876829442Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=33.794609ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.883631032Z level=info msg="Executing migration" id="add annotations_enabled column" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.89414154Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=10.508968ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.897600417Z level=info msg="Executing migration" id="add time_selection_enabled column" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.904671009Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=7.070002ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.909401688Z level=info msg="Executing migration" id="delete orphaned public dashboards" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.909740911Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=338.923µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.918226829Z level=info msg="Executing migration" id="add share column" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.931301574Z level=info msg="Migration successfully executed" id="add share column" duration=13.072175ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.934531998Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.93474495Z 
level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=210.372µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.940821862Z level=info msg="Executing migration" id="create file table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.941957073Z level=info msg="Migration successfully executed" id="create file table" duration=1.134621ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.946049406Z level=info msg="Executing migration" id="file table idx: path natural pk" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.949958966Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=3.90747ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.955881467Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.956980729Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.099782ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.96288875Z level=info msg="Executing migration" id="create file_meta table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.964020321Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.131251ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.970982034Z level=info msg="Executing migration" id="file table idx: path key" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.972301507Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.318813ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.976590282Z level=info msg="Executing migration" id="set path collation in file table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.976774874Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=185.122µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.981737485Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.981960157Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=222.592µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.987578825Z level=info msg="Executing migration" id="managed permissions migration" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.988212301Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=637.146µs 11:50:08 kafka | [2024-02-21 11:48:12,329] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,329] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,336] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,337] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,337] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,337] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,338] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,346] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,347] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,347] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,347] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,347] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(k7zjIZnaTc-jGITsmuEMwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,355] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,355] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,355] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,356] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,356] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,366] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,367] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,367] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,367] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,368] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,374] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,375] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,375] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,375] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,376] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,383] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,384] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,384] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,384] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,385] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,392] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,393] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,393] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,393] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,393] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,406] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,406] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,407] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,407] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,407] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,414] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,415] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,415] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,415] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,415] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,424] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,424] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,425] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,425] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,425] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,433] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,434] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,434] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,434] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,435] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,441] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,442] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,442] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,442] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,442] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,453] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,454] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,454] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,455] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,455] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,461] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,462] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,462] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,462] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,463] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,471] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,471] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,472] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,472] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,472] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,479] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,480] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,480] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,480] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,481] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,487] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,488] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,488] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,488] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,488] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,496] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 11:50:08 kafka | [2024-02-21 11:48:12,497] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 11:50:08 kafka | [2024-02-21 11:48:12,497] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,497] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 11:50:08 kafka | [2024-02-21 11:48:12,497] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(Kd_bSyAnRqaVsDMHCxs5MA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,503] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,504] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,504] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,504] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,504] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,504] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,504] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,504] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,504] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,504] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,504] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,504] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,504] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,504] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,504] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 
epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,505] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,506] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,506] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,506] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,506] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,506] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,507] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,507] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,507] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,507] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,507] TRACE 
[Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,507] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,507] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,507] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,508] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,508] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,508] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,508] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,508] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,508] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,508] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,509] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,509] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,509] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,509] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,509] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 
(state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,509] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,509] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.992114271Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.992347574Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=232.853µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.99579846Z level=info msg="Executing migration" id="RBAC action name migrator" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:45.996647319Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=848.889µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.00071Z level=info msg="Executing migration" id="Add UID column to playlist" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.015061189Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=14.351479ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.020692677Z level=info msg="Executing migration" id="Update uid column values in playlist" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.021089541Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=400.064µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.024516156Z level=info msg="Executing migration" id="Add index for uid in playlist" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.02676828Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=2.246654ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.030168304Z level=info msg="Executing migration" id="update group index for alert rules" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.030634579Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=467.405µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.03553233Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.035865743Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=332.273µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.039256218Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.040141918Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=885.58µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.045083418Z level=info msg="Executing migration" id="add action column to seed_assignment" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.054139232Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.054874ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.059060142Z level=info msg="Executing migration" id="add scope column to 
seed_assignment" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.067792552Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=8.73428ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.071194238Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.07231193Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.117312ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.075700174Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.182473886Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=106.774152ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.187628699Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.188706221Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.076682ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.191912004Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.193043215Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.130751ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.197711754Z level=info msg="Executing migration" id="add primary key to seed_assigment" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.236067849Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=38.357275ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.242139922Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.242337804Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=198.992µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.245990402Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 11:50:08 kafka | [2024-02-21 11:48:12,509] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,509] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,518] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,519] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,520] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,520] INFO 
[GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,520] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,520] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,520] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,520] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,520] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,520] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,520] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,520] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,521] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,521] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,521] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,521] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,521] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,521] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,521] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,521] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,521] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,521] INFO [GroupMetadataManager brokerId=1] 
Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,521] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,521] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,521] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,521] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,522] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,522] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,522] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,522] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,522] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,522] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,522] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,522] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,522] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,522] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,522] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,522] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,522] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,522] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group 
metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,522] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,523] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,523] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,523] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,523] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,523] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,523] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,523] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,523] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,523] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,523] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,523] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,523] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,523] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,523] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,523] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 4 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,524] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,524] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,524] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,524] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,524] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,524] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,524] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,524] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,524] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,524] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,524] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,524] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,524] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,524] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,524] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,525] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,525] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,525] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,525] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,525] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,525] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,525] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.246139463Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=149.201µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.249314726Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.249667379Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=353.043µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.256816084Z level=info msg="Executing migration" id="create folder table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.258371239Z level=info msg="Migration successfully executed" id="create folder table" duration=1.556996ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.264672775Z level=info msg="Executing migration" id="Add index for parent_uid" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.265910527Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.237302ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.269827678Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.270937449Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.109651ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.276351545Z level=info msg="Executing migration" id="Update folder title length" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.276380035Z level=info msg="Migration successfully executed" id="Update folder title length" duration=29.3µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.279342806Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.280543078Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.195502ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.28371657Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.284825133Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.108573ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.29038738Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.29231345Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.92562ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.29617234Z level=info msg="Executing migration" id="Sync dashboard and folder table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.297041798Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=868.368µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.300366072Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.300719127Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" 
duration=352.535µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.303750948Z level=info msg="Executing migration" id="create anon_device table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.304707988Z level=info msg="Migration successfully executed" id="create anon_device table" duration=957.16µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.310917722Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.312245906Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.328124ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.316699321Z level=info msg="Executing migration" id="add index anon_device.updated_at" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.317700722Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.003211ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.324776225Z level=info msg="Executing migration" id="create signing_key table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.326366101Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.588536ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.332765547Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.334091931Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.326143ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.338337365Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.339855561Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.518426ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.348630301Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.349236537Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=612.356µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.354002417Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.363530805Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=9.525178ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.369131892Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.370028781Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=898.759µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.374861932Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.376468419Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.606016ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.381413029Z level=info msg="Executing migration" id="create sso_setting table" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.382572291Z level=info msg="Migration 
successfully executed" id="create sso_setting table" duration=1.160942ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.386959156Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.38825666Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.303224ms 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.395705496Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.396050531Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=348.745µs 11:50:08 grafana | logger=migrator t=2024-02-21T11:47:46.400976491Z level=info msg="migrations completed" performed=526 skipped=0 duration=4.68344905s 11:50:08 grafana | logger=sqlstore t=2024-02-21T11:47:46.413694193Z level=info msg="Created default admin" user=admin 11:50:08 grafana | logger=sqlstore t=2024-02-21T11:47:46.414052226Z level=info msg="Created default organization" 11:50:08 grafana | logger=secrets t=2024-02-21T11:47:46.419155349Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 11:50:08 grafana | logger=plugin.store t=2024-02-21T11:47:46.44254514Z level=info msg="Loading plugins..." 11:50:08 kafka | [2024-02-21 11:48:12,525] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,525] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,525] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,525] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,525] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,525] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,525] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,525] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,525] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,525] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,525] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,525] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,525] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,526] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 4 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,526] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,526] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,526] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,526] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,526] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,526] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,526] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,526] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,526] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,526] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,526] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,526] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,526] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,526] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,526] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,526] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,526] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,526] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,526] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,526] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,527] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,527] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,527] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,527] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,527] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,527] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,527] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,527] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,527] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,527] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,527] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,527] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,527] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,527] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,527] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,527] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,527] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,527] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,527] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,527] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,527] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,527] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,528] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,528] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,528] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,528] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,528] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,528] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,528] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,528] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,528] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,528] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,528] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,528] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,528] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,528] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,528] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,529] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 11:50:08 kafka | [2024-02-21 11:48:12,530] INFO [Broker id=1] Finished LeaderAndIsr request in 676ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,533] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=Kd_bSyAnRqaVsDMHCxs5MA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=k7zjIZnaTc-jGITsmuEMwA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,539] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,539] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 
with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,540] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,540] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,540] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 grafana | logger=local.finder t=2024-02-21T11:47:46.483986288Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 11:50:08 grafana | logger=plugin.store t=2024-02-21T11:47:46.484053609Z level=info msg="Plugins loaded" count=55 duration=41.510219ms 11:50:08 grafana | logger=query_data t=2024-02-21T11:47:46.486396633Z level=info msg="Query Service initialization" 11:50:08 grafana | logger=live.push_http t=2024-02-21T11:47:46.489222322Z level=info msg="Live Push Gateway initialization" 11:50:08 grafana | logger=ngalert.migration t=2024-02-21T11:47:46.495436506Z level=info msg=Starting 11:50:08 grafana | logger=ngalert.migration orgID=1 t=2024-02-21T11:47:46.496242544Z level=info msg="Migrating alerts for organisation" 11:50:08 grafana | logger=ngalert.migration orgID=1 t=2024-02-21T11:47:46.496997643Z level=info msg="Alerts found to migrate" alerts=0 11:50:08 grafana | logger=ngalert.migration CurrentType=Legacy DesiredType=UnifiedAlerting CleanOnDowngrade=false CleanOnUpgrade=false t=2024-02-21T11:47:46.498917872Z level=info msg="Completed legacy migration" 11:50:08 grafana | logger=infra.usagestats.collector t=2024-02-21T11:47:46.531464788Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 11:50:08 grafana | logger=provisioning.datasources t=2024-02-21T11:47:46.534164126Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 11:50:08 grafana | logger=provisioning.alerting t=2024-02-21T11:47:46.729303471Z level=info msg="starting to provision alerting" 11:50:08 grafana | logger=provisioning.alerting t=2024-02-21T11:47:46.729333671Z level=info msg="finished to provision alerting" 11:50:08 grafana | logger=ngalert.state.manager t=2024-02-21T11:47:46.729609254Z level=info msg="Warming state cache for startup" 11:50:08 grafana | logger=ngalert.state.manager t=2024-02-21T11:47:46.730123719Z level=info msg="State cache has been initialized" states=0 duration=514.115µs 11:50:08 grafana | logger=ngalert.scheduler t=2024-02-21T11:47:46.730162369Z level=info msg="Starting scheduler" tickInterval=10s 11:50:08 grafana | logger=ticker t=2024-02-21T11:47:46.730335241Z level=info msg=starting first_tick=2024-02-21T11:47:50Z 11:50:08 
grafana | logger=ngalert.multiorg.alertmanager t=2024-02-21T11:47:46.730428972Z level=info msg="Starting MultiOrg Alertmanager" 11:50:08 grafana | logger=grafanaStorageLogger t=2024-02-21T11:47:46.730536833Z level=info msg="Storage starting" 11:50:08 grafana | logger=http.server t=2024-02-21T11:47:46.733824387Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 11:50:08 grafana | logger=grafana-apiserver t=2024-02-21T11:47:46.752297478Z level=info msg="Authentication is disabled" 11:50:08 grafana | logger=grafana-apiserver t=2024-02-21T11:47:46.75733375Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 11:50:08 grafana | logger=sqlstore.transactions t=2024-02-21T11:47:46.823631044Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 11:50:08 grafana | logger=plugins.update.checker t=2024-02-21T11:47:46.834719968Z level=info msg="Update check succeeded" duration=103.197025ms 11:50:08 grafana | logger=grafana.update.checker t=2024-02-21T11:47:46.865186293Z level=info msg="Update check succeeded" duration=131.74346ms 11:50:08 grafana | logger=sqlstore.transactions t=2024-02-21T11:47:46.97829114Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 11:50:08 grafana | logger=sqlstore.transactions t=2024-02-21T11:47:46.989692069Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked" 11:50:08 grafana | logger=sqlstore.transactions t=2024-02-21T11:47:47.001241198Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=2 code="database is locked" 11:50:08 grafana | logger=infra.usagestats t=2024-02-21T11:48:28.744264144Z level=info msg="Usage stats are ready to report" 11:50:08 kafka | [2024-02-21 11:48:12,540] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,540] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,540] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,540] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 
11:48:12,540] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,540] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,540] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,540] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,540] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,540] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,540] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,540] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,540] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to 
UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,540] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,540] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,540] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,540] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,540] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,540] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,540] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,540] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,540] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, 
leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,540] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,541] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,541] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,542] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 11:50:08 kafka | [2024-02-21 11:48:12,602] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group deda7b7f-e78f-4ba0-9889-983a637c2ccd in Empty state. Created a new member id consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-3-f84bcac6-ce3f-43f9-9186-5dad97824e9a and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,612] INFO [GroupCoordinator 1]: Preparing to rebalance group deda7b7f-e78f-4ba0-9889-983a637c2ccd in state PreparingRebalance with old generation 0 (__consumer_offsets-37) (reason: Adding new member consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-3-f84bcac6-ce3f-43f9-9186-5dad97824e9a with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,620] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-4d170d86-8c1d-467f-973a-032b61efbcdf and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:12,623] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-4d170d86-8c1d-467f-973a-032b61efbcdf with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:13,142] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 4c4f3bdc-0a77-42e6-89df-d332cf428198 in Empty state. Created a new member id consumer-4c4f3bdc-0a77-42e6-89df-d332cf428198-2-e4302eb2-03de-4260-b77f-25ccc4bdfb26 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:13,148] INFO [GroupCoordinator 1]: Preparing to rebalance group 4c4f3bdc-0a77-42e6-89df-d332cf428198 in state PreparingRebalance with old generation 0 (__consumer_offsets-23) (reason: Adding new member consumer-4c4f3bdc-0a77-42e6-89df-d332cf428198-2-e4302eb2-03de-4260-b77f-25ccc4bdfb26 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:15,624] INFO [GroupCoordinator 1]: Stabilized group deda7b7f-e78f-4ba0-9889-983a637c2ccd generation 1 (__consumer_offsets-37) with 1 members (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:15,637] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:15,673] INFO [GroupCoordinator 1]: Assignment received from leader consumer-deda7b7f-e78f-4ba0-9889-983a637c2ccd-3-f84bcac6-ce3f-43f9-9186-5dad97824e9a for group deda7b7f-e78f-4ba0-9889-983a637c2ccd for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:15,674] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-4d170d86-8c1d-467f-973a-032b61efbcdf for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:16,149] INFO [GroupCoordinator 1]: Stabilized group 4c4f3bdc-0a77-42e6-89df-d332cf428198 generation 1 (__consumer_offsets-23) with 1 members (kafka.coordinator.group.GroupCoordinator) 11:50:08 kafka | [2024-02-21 11:48:16,162] INFO [GroupCoordinator 1]: Assignment received from leader consumer-4c4f3bdc-0a77-42e6-89df-d332cf428198-2-e4302eb2-03de-4260-b77f-25ccc4bdfb26 for group 4c4f3bdc-0a77-42e6-89df-d332cf428198 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 11:50:08 ++ echo 'Tearing down containers...' 11:50:08 Tearing down containers... 11:50:08 ++ docker-compose down -v --remove-orphans 11:50:08 Stopping grafana ... 11:50:08 Stopping policy-apex-pdp ... 11:50:08 Stopping policy-pap ... 11:50:08 Stopping policy-api ... 11:50:08 Stopping kafka ... 11:50:08 Stopping mariadb ... 11:50:08 Stopping compose_zookeeper_1 ... 11:50:08 Stopping prometheus ... 11:50:08 Stopping simulator ... 11:50:09 Stopping grafana ... done 11:50:09 Stopping prometheus ... done 11:50:19 Stopping policy-apex-pdp ... done 11:50:30 Stopping simulator ... done 11:50:30 Stopping policy-pap ... done 11:50:31 Stopping mariadb ... done 11:50:31 Stopping kafka ... done 11:50:31 Stopping compose_zookeeper_1 ... done 11:50:40 Stopping policy-api ... done 11:50:40 Removing grafana ... 11:50:40 Removing policy-apex-pdp ... 11:50:40 Removing policy-pap ... 11:50:40 Removing policy-api ... 11:50:40 Removing kafka ... 11:50:40 Removing policy-db-migrator ... 11:50:40 Removing mariadb ... 11:50:40 Removing compose_zookeeper_1 ... 11:50:40 Removing prometheus ... 11:50:40 Removing simulator ... 11:50:41 Removing compose_zookeeper_1 ... done 11:50:41 Removing prometheus ... done 11:50:41 Removing policy-apex-pdp ... done 11:50:41 Removing policy-pap ... done 11:50:41 Removing grafana ... done 11:50:41 Removing simulator ... done 11:50:41 Removing policy-api ... done 11:50:41 Removing policy-db-migrator ... 
done 11:50:41 Removing mariadb ... done 11:50:41 Removing kafka ... done 11:50:41 Removing network compose_default 11:50:41 ++ cd /w/workspace/policy-pap-master-project-csit-pap 11:50:41 + load_set 11:50:41 + _setopts=hxB 11:50:41 ++ echo braceexpand:hashall:interactive-comments:xtrace 11:50:41 ++ tr : ' ' 11:50:41 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 11:50:41 + set +o braceexpand 11:50:41 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 11:50:41 + set +o hashall 11:50:41 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 11:50:41 + set +o interactive-comments 11:50:41 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 11:50:41 + set +o xtrace 11:50:41 ++ echo hxB 11:50:41 ++ sed 's/./& /g' 11:50:41 + for i in $(echo "$_setopts" | sed 's/./& /g') 11:50:41 + set +h 11:50:41 + for i in $(echo "$_setopts" | sed 's/./& /g') 11:50:41 + set +x 11:50:41 + [[ -n /tmp/tmp.jgsfe5r9Wz ]] 11:50:41 + rsync -av /tmp/tmp.jgsfe5r9Wz/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 11:50:41 sending incremental file list 11:50:41 ./ 11:50:41 log.html 11:50:41 output.xml 11:50:41 report.html 11:50:41 testplan.txt 11:50:41 11:50:41 sent 910,071 bytes received 95 bytes 1,820,332.00 bytes/sec 11:50:41 total size is 909,525 speedup is 1.00 11:50:41 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models 11:50:41 + exit 0 11:50:41 $ ssh-agent -k 11:50:41 unset SSH_AUTH_SOCK; 11:50:41 unset SSH_AGENT_PID; 11:50:41 echo Agent pid 2116 killed; 11:50:41 [ssh-agent] Stopped. 11:50:41 Robot results publisher started... 11:50:41 INFO: Checking test criticality is deprecated and will be dropped in a future release! 11:50:41 -Parsing output xml: 11:50:41 Done! 11:50:41 WARNING! Could not find file: **/log.html 11:50:41 WARNING! Could not find file: **/report.html 11:50:41 -Copying log files to build dir: 11:50:42 Done! 11:50:42 -Assigning results to build: 11:50:42 Done! 11:50:42 -Checking thresholds: 11:50:42 Done! 11:50:42 Done publishing Robot results. 11:50:42 [PostBuildScript] - [INFO] Executing post build scripts. 
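For reference, the container teardown and result-archiving steps traced above reduce to roughly the following shell sequence. This is a minimal sketch assembled from the commands visible in the log (docker-compose down, rsync, rm); the ROBOT_OUTPUT_DIR variable is an illustrative stand-in for the temporary directory the job created earlier, not a name used by the job itself.

    #!/bin/bash
    # Sketch of the CSIT teardown/archive step, based on the trace above.
    set -euo pipefail

    # Illustrative stand-in for the temp dir holding the Robot Framework output.
    ROBOT_OUTPUT_DIR=/tmp/robot-output

    # Stop and remove all CSIT containers, their named volumes and any orphans.
    docker-compose down -v --remove-orphans

    # Copy log.html, output.xml, report.html and testplan.txt into the
    # workspace archive location so Jenkins can publish them.
    rsync -av "${ROBOT_OUTPUT_DIR}/" \
      /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap

    # Remove the checked-out models repository before exiting successfully.
    rm -rf /w/workspace/policy-pap-master-project-csit-pap/models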
11:50:42 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins5426765438142892240.sh 11:50:42 ---> sysstat.sh 11:50:42 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins15046977590738273418.sh 11:50:42 ---> package-listing.sh 11:50:42 ++ facter osfamily 11:50:42 ++ tr '[:upper:]' '[:lower:]' 11:50:42 + OS_FAMILY=debian 11:50:42 + workspace=/w/workspace/policy-pap-master-project-csit-pap 11:50:42 + START_PACKAGES=/tmp/packages_start.txt 11:50:42 + END_PACKAGES=/tmp/packages_end.txt 11:50:42 + DIFF_PACKAGES=/tmp/packages_diff.txt 11:50:42 + PACKAGES=/tmp/packages_start.txt 11:50:42 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 11:50:42 + PACKAGES=/tmp/packages_end.txt 11:50:42 + case "${OS_FAMILY}" in 11:50:42 + dpkg -l 11:50:42 + grep '^ii' 11:50:42 + '[' -f /tmp/packages_start.txt ']' 11:50:42 + '[' -f /tmp/packages_end.txt ']' 11:50:42 + diff /tmp/packages_start.txt /tmp/packages_end.txt 11:50:42 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 11:50:42 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/ 11:50:42 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/ 11:50:42 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins15195289388579172221.sh 11:50:42 ---> capture-instance-metadata.sh 11:50:42 Setup pyenv: 11:50:42 system 11:50:42 3.8.13 11:50:42 3.9.13 11:50:42 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 11:50:43 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-VvqR from file:/tmp/.os_lf_venv 11:50:44 lf-activate-venv(): INFO: Installing: lftools 11:50:54 lf-activate-venv(): INFO: Adding /tmp/venv-VvqR/bin to PATH 11:50:54 INFO: Running in OpenStack, capturing instance metadata 11:50:55 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins9683831905139847935.sh 11:50:55 provisioning config files... 11:50:55 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config15897557421479221781tmp 11:50:55 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] 11:50:55 Run condition [Regular expression match] preventing perform for step [Provide Configuration files] 11:50:55 [EnvInject] - Injecting environment variables from a build step. 11:50:55 [EnvInject] - Injecting as environment variables the properties content 11:50:55 SERVER_ID=logs 11:50:55 11:50:55 [EnvInject] - Variables injected successfully. 11:50:55 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins7435347703648092475.sh 11:50:55 ---> create-netrc.sh 11:50:55 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins11433524173293637422.sh 11:50:55 ---> python-tools-install.sh 11:50:55 Setup pyenv: 11:50:55 system 11:50:55 3.8.13 11:50:55 3.9.13 11:50:55 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 11:50:55 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-VvqR from file:/tmp/.os_lf_venv 11:50:57 lf-activate-venv(): INFO: Installing: lftools 11:51:05 lf-activate-venv(): INFO: Adding /tmp/venv-VvqR/bin to PATH 11:51:05 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins7272411505631675756.sh 11:51:05 ---> sudo-logs.sh 11:51:05 Archiving 'sudo' log.. 
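The package-listing step traced above simply snapshots the installed Debian packages and diffs them against the pre-job snapshot; a condensed sketch of that logic follows. Paths are the ones printed in the trace; the "|| true" guard is an assumption added here so a non-empty diff does not abort the script.

    #!/bin/bash
    # Condensed sketch of the package-listing logic visible in the trace.
    START_PACKAGES=/tmp/packages_start.txt
    END_PACKAGES=/tmp/packages_end.txt
    DIFF_PACKAGES=/tmp/packages_diff.txt
    ARCHIVE_DIR=/w/workspace/policy-pap-master-project-csit-pap/archives

    # Record the packages installed at the end of the job (Debian family).
    dpkg -l | grep '^ii' > "${END_PACKAGES}"

    # Diff against the snapshot taken before the job, if one exists.
    if [ -f "${START_PACKAGES}" ] && [ -f "${END_PACKAGES}" ]; then
      diff "${START_PACKAGES}" "${END_PACKAGES}" > "${DIFF_PACKAGES}" || true
    fi

    # Archive all three listings next to the other job artifacts.
    mkdir -p "${ARCHIVE_DIR}"
    cp -f "${DIFF_PACKAGES}" "${END_PACKAGES}" "${START_PACKAGES}" "${ARCHIVE_DIR}/"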
11:51:05 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins14415456646274055677.sh 11:51:05 ---> job-cost.sh 11:51:05 Setup pyenv: 11:51:05 system 11:51:05 3.8.13 11:51:05 3.9.13 11:51:05 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 11:51:05 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-VvqR from file:/tmp/.os_lf_venv 11:51:07 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 11:51:12 lf-activate-venv(): INFO: Adding /tmp/venv-VvqR/bin to PATH 11:51:12 INFO: No Stack... 11:51:12 INFO: Retrieving Pricing Info for: v3-standard-8 11:51:13 INFO: Archiving Costs 11:51:13 [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins10971026089822837377.sh 11:51:13 ---> logs-deploy.sh 11:51:13 Setup pyenv: 11:51:13 system 11:51:13 3.8.13 11:51:13 3.9.13 11:51:13 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 11:51:13 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-VvqR from file:/tmp/.os_lf_venv 11:51:14 lf-activate-venv(): INFO: Installing: lftools 11:51:22 lf-activate-venv(): INFO: Adding /tmp/venv-VvqR/bin to PATH 11:51:22 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1585 11:51:22 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt 11:51:24 Archives upload complete. 11:51:24 INFO: archiving logs to Nexus 11:51:25 ---> uname -a: 11:51:25 Linux prd-ubuntu1804-docker-8c-8g-7307 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux 11:51:25 11:51:25 11:51:25 ---> lscpu: 11:51:25 Architecture: x86_64 11:51:25 CPU op-mode(s): 32-bit, 64-bit 11:51:25 Byte Order: Little Endian 11:51:25 CPU(s): 8 11:51:25 On-line CPU(s) list: 0-7 11:51:25 Thread(s) per core: 1 11:51:25 Core(s) per socket: 1 11:51:25 Socket(s): 8 11:51:25 NUMA node(s): 1 11:51:25 Vendor ID: AuthenticAMD 11:51:25 CPU family: 23 11:51:25 Model: 49 11:51:25 Model name: AMD EPYC-Rome Processor 11:51:25 Stepping: 0 11:51:25 CPU MHz: 2799.998 11:51:25 BogoMIPS: 5599.99 11:51:25 Virtualization: AMD-V 11:51:25 Hypervisor vendor: KVM 11:51:25 Virtualization type: full 11:51:25 L1d cache: 32K 11:51:25 L1i cache: 32K 11:51:25 L2 cache: 512K 11:51:25 L3 cache: 16384K 11:51:25 NUMA node0 CPU(s): 0-7 11:51:25 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities 11:51:25 11:51:25 11:51:25 ---> nproc: 11:51:25 8 11:51:25 11:51:25 11:51:25 ---> df -h: 11:51:25 Filesystem Size Used Avail Use% Mounted on 11:51:25 udev 16G 0 16G 0% /dev 11:51:25 tmpfs 3.2G 708K 3.2G 1% /run 11:51:25 /dev/vda1 155G 14G 142G 9% / 11:51:25 tmpfs 16G 0 16G 0% /dev/shm 11:51:25 tmpfs 5.0M 0 5.0M 0% /run/lock 11:51:25 tmpfs 16G 0 16G 0% /sys/fs/cgroup 11:51:25 /dev/vda15 105M 4.4M 100M 5% /boot/efi 11:51:25 tmpfs 3.2G 0 3.2G 0% /run/user/1001 11:51:25 11:51:25 11:51:25 ---> free -m: 11:51:25 total used free shared buff/cache available 11:51:25 Mem: 
32167 848 25321 0 5997 30863 11:51:25 Swap: 1023 0 1023 11:51:25 11:51:25 11:51:25 ---> ip addr: 11:51:25 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 11:51:25 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 11:51:25 inet 127.0.0.1/8 scope host lo 11:51:25 valid_lft forever preferred_lft forever 11:51:25 inet6 ::1/128 scope host 11:51:25 valid_lft forever preferred_lft forever 11:51:25 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 11:51:25 link/ether fa:16:3e:b7:e9:a5 brd ff:ff:ff:ff:ff:ff 11:51:25 inet 10.30.106.115/23 brd 10.30.107.255 scope global dynamic ens3 11:51:25 valid_lft 85917sec preferred_lft 85917sec 11:51:25 inet6 fe80::f816:3eff:feb7:e9a5/64 scope link 11:51:25 valid_lft forever preferred_lft forever 11:51:25 3: docker0: mtu 1500 qdisc noqueue state DOWN group default 11:51:25 link/ether 02:42:f6:58:20:4e brd ff:ff:ff:ff:ff:ff 11:51:25 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 11:51:25 valid_lft forever preferred_lft forever 11:51:25 11:51:25 11:51:25 ---> sar -b -r -n DEV: 11:51:25 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-7307) 02/21/24 _x86_64_ (8 CPU) 11:51:25 11:51:25 11:43:25 LINUX RESTART (8 CPU) 11:51:25 11:51:25 11:44:01 tps rtps wtps bread/s bwrtn/s 11:51:25 11:45:01 106.41 42.55 63.86 1928.16 17119.49 11:51:25 11:46:01 118.43 19.98 98.45 2447.98 23127.22 11:51:25 11:47:01 162.50 3.10 159.40 306.83 81348.75 11:51:25 11:48:01 391.23 12.33 378.90 785.00 73101.62 11:51:25 11:49:01 22.68 0.37 22.31 32.79 10904.40 11:51:25 11:50:01 10.10 0.00 10.10 0.00 10568.36 11:51:25 11:51:01 66.97 1.37 65.61 111.05 13641.99 11:51:25 Average: 125.47 11.39 114.09 801.72 32830.28 11:51:25 11:51:25 11:44:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 11:51:25 11:45:01 30168604 31756824 2770616 8.41 68188 1830744 1372028 4.04 811628 1666356 153992 11:51:25 11:46:01 29785128 31748840 3154092 9.58 89540 2167456 1373668 4.04 871764 1953760 146684 11:51:25 11:47:01 27070428 31669908 5868792 17.82 129704 4651032 1413416 4.16 1007908 4387152 575524 11:51:25 11:48:01 24174476 29995868 8764744 26.61 153544 5785352 8523332 25.08 2831408 5340220 464 11:51:25 11:49:01 23757620 29585600 9181600 27.87 155224 5787756 8850480 26.04 3281728 5297760 624 11:51:25 11:50:01 23751344 29580112 9187876 27.89 155332 5788308 8866028 26.09 3286652 5298304 236 11:51:25 11:51:01 25919040 31590164 7020180 21.31 158216 5646452 1565324 4.61 1325056 5158804 22868 11:51:25 Average: 26375234 30846759 6563986 19.93 129964 4522443 4566325 13.44 1916592 4157479 128627 11:51:25 11:51:25 11:44:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil 11:51:25 11:45:01 lo 1.47 1.47 0.16 0.16 0.00 0.00 0.00 0.00 11:51:25 11:45:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 11:51:25 11:45:01 ens3 211.18 120.79 1140.81 36.30 0.00 0.00 0.00 0.00 11:51:25 11:46:01 lo 4.13 4.13 0.40 0.40 0.00 0.00 0.00 0.00 11:51:25 11:46:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 11:51:25 11:46:01 ens3 70.36 52.90 907.51 11.49 0.00 0.00 0.00 0.00 11:51:25 11:47:01 lo 6.53 6.53 0.65 0.65 0.00 0.00 0.00 0.00 11:51:25 11:47:01 br-2146f22987d6 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 11:51:25 11:47:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 11:51:25 11:47:01 ens3 810.50 390.10 19076.30 29.21 0.00 0.00 0.00 0.00 11:51:25 11:48:01 lo 3.40 3.40 0.30 0.30 0.00 0.00 0.00 0.00 11:51:25 11:48:01 vethc798a56 0.00 0.30 0.00 0.02 0.00 0.00 0.00 0.00 11:51:25 11:48:01 br-2146f22987d6 0.70 0.60 0.05 0.30 
0.00 0.00 0.00 0.00 11:51:25 11:48:01 veth5ac509e 0.45 0.62 0.05 0.30 0.00 0.00 0.00 0.00 11:51:25 11:49:01 lo 5.60 5.60 3.58 3.58 0.00 0.00 0.00 0.00 11:51:25 11:49:01 vethc798a56 0.00 0.08 0.00 0.00 0.00 0.00 0.00 0.00 11:51:25 11:49:01 br-2146f22987d6 1.98 2.32 1.79 1.73 0.00 0.00 0.00 0.00 11:51:25 11:49:01 veth5ac509e 0.13 0.22 0.01 0.01 0.00 0.00 0.00 0.00 11:51:25 11:50:01 lo 4.72 4.72 0.30 0.30 0.00 0.00 0.00 0.00 11:51:25 11:50:01 vethc798a56 0.00 0.02 0.00 0.00 0.00 0.00 0.00 0.00 11:51:25 11:50:01 br-2146f22987d6 0.67 0.53 0.06 0.04 0.00 0.00 0.00 0.00 11:51:25 11:50:01 veth5ac509e 0.27 0.13 0.02 0.01 0.00 0.00 0.00 0.00 11:51:25 11:51:01 lo 4.73 4.73 0.44 0.44 0.00 0.00 0.00 0.00 11:51:25 11:51:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 11:51:25 11:51:01 ens3 1636.31 909.82 32751.72 148.17 0.00 0.00 0.00 0.00 11:51:25 Average: lo 4.37 4.37 0.83 0.83 0.00 0.00 0.00 0.00 11:51:25 Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 11:51:25 Average: ens3 209.32 111.95 4607.86 17.65 0.00 0.00 0.00 0.00 11:51:25 11:51:25 11:51:25 ---> sar -P ALL: 11:51:25 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-7307) 02/21/24 _x86_64_ (8 CPU) 11:51:25 11:51:25 11:43:25 LINUX RESTART (8 CPU) 11:51:25 11:51:25 11:44:01 CPU %user %nice %system %iowait %steal %idle 11:51:25 11:45:01 all 10.38 0.00 0.88 2.29 0.04 86.41 11:51:25 11:45:01 0 19.57 0.00 1.24 0.60 0.03 78.56 11:51:25 11:45:01 1 6.60 0.00 0.52 3.12 0.03 89.72 11:51:25 11:45:01 2 2.27 0.00 0.38 9.69 0.03 87.62 11:51:25 11:45:01 3 6.13 0.00 1.09 0.43 0.03 92.31 11:51:25 11:45:01 4 12.20 0.00 0.79 0.79 0.07 86.15 11:51:25 11:45:01 5 6.06 0.00 0.78 0.27 0.03 92.86 11:51:25 11:45:01 6 4.03 0.00 0.70 0.43 0.02 94.82 11:51:25 11:45:01 7 26.13 0.00 1.58 3.02 0.05 69.22 11:51:25 11:46:01 all 10.11 0.00 0.87 2.51 0.04 86.47 11:51:25 11:46:01 0 26.35 0.00 1.40 1.97 0.05 70.22 11:51:25 11:46:01 1 11.34 0.00 1.17 0.50 0.02 86.98 11:51:25 11:46:01 2 3.68 0.00 0.74 9.86 0.02 85.70 11:51:25 11:46:01 3 20.42 0.00 1.39 2.41 0.07 75.71 11:51:25 11:46:01 4 2.56 0.00 0.44 0.32 0.03 96.65 11:51:25 11:46:01 5 4.66 0.00 0.35 0.18 0.02 94.79 11:51:25 11:46:01 6 5.22 0.00 0.58 0.00 0.02 94.18 11:51:25 11:46:01 7 6.65 0.00 0.87 4.83 0.05 87.60 11:51:25 11:47:01 all 9.46 0.00 4.11 9.79 0.05 76.59 11:51:25 11:47:01 0 10.68 0.00 2.89 0.17 0.05 86.21 11:51:25 11:47:01 1 11.07 0.00 4.38 0.25 0.05 84.24 11:51:25 11:47:01 2 7.92 0.00 4.27 28.05 0.07 59.70 11:51:25 11:47:01 3 9.35 0.00 4.30 6.63 0.05 79.66 11:51:25 11:47:01 4 7.90 0.00 3.94 5.93 0.03 82.20 11:51:25 11:47:01 5 9.47 0.00 4.29 0.68 0.05 85.51 11:51:25 11:47:01 6 8.39 0.00 5.10 13.81 0.07 72.64 11:51:25 11:47:01 7 10.89 0.00 3.66 22.88 0.05 62.52 11:51:25 11:48:01 all 20.09 0.00 4.30 7.00 0.08 68.53 11:51:25 11:48:01 0 20.07 0.00 4.22 2.10 0.07 73.55 11:51:25 11:48:01 1 25.79 0.00 4.50 0.78 0.07 68.87 11:51:25 11:48:01 2 19.07 0.00 4.57 13.27 0.10 62.99 11:51:25 11:48:01 3 20.93 0.00 4.37 2.07 0.08 72.55 11:51:25 11:48:01 4 22.56 0.00 4.28 12.17 0.10 60.89 11:51:25 11:48:01 5 13.80 0.00 3.32 20.49 0.08 62.31 11:51:25 11:48:01 6 16.90 0.00 4.65 2.89 0.07 75.50 11:51:25 11:48:01 7 21.61 0.00 4.52 2.28 0.08 71.50 11:51:25 11:49:01 all 15.64 0.00 1.49 0.64 0.06 82.18 11:51:25 11:49:01 0 15.01 0.00 1.42 0.03 0.07 83.47 11:51:25 11:49:01 1 17.89 0.00 1.62 0.00 0.07 80.43 11:51:25 11:49:01 2 20.89 0.00 1.87 4.77 0.05 72.42 11:51:25 11:49:01 3 15.01 0.00 1.55 0.07 0.03 83.34 11:51:25 11:49:01 4 12.27 0.00 1.21 0.02 0.05 86.46 11:51:25 11:49:01 5 16.21 0.00 1.47 0.02 0.05 82.25 11:51:25 11:49:01 6 
14.21 0.00 1.37 0.00 0.08 84.34 11:51:25 11:49:01 7 13.61 0.00 1.44 0.17 0.05 84.74 11:51:25 11:50:01 all 1.02 0.00 0.15 0.80 0.03 97.99 11:51:25 11:50:01 0 1.14 0.00 0.17 0.00 0.03 98.66 11:51:25 11:50:01 1 1.90 0.00 0.20 0.00 0.03 97.87 11:51:25 11:50:01 2 1.10 0.00 0.13 6.05 0.03 92.68 11:51:25 11:50:01 3 0.58 0.00 0.12 0.00 0.02 99.28 11:51:25 11:50:01 4 0.80 0.00 0.13 0.33 0.02 98.71 11:51:25 11:50:01 5 0.89 0.00 0.12 0.00 0.03 98.96 11:51:25 11:50:01 6 0.79 0.00 0.13 0.00 0.05 99.03 11:51:25 11:50:01 7 0.95 0.00 0.20 0.02 0.03 98.80 11:51:25 11:51:01 all 4.86 0.00 0.73 1.79 0.03 92.59 11:51:25 11:51:01 0 4.31 0.00 0.82 1.30 0.05 93.52 11:51:25 11:51:01 1 1.27 0.00 0.64 0.17 0.03 97.89 11:51:25 11:51:01 2 17.51 0.00 1.25 6.00 0.03 75.20 11:51:25 11:51:01 3 1.57 0.00 0.70 0.25 0.02 97.46 11:51:25 11:51:01 4 0.99 0.00 0.48 0.94 0.03 97.56 11:51:25 11:51:01 5 3.49 0.00 0.67 0.23 0.03 95.57 11:51:25 11:51:01 6 7.61 0.00 0.67 0.92 0.03 90.77 11:51:25 11:51:01 7 2.09 0.00 0.62 4.49 0.03 92.77 11:51:25 Average: all 10.21 0.00 1.78 3.53 0.05 84.43 11:51:25 Average: 0 13.88 0.00 1.73 0.88 0.05 83.46 11:51:25 Average: 1 10.81 0.00 1.85 0.69 0.04 86.61 11:51:25 Average: 2 10.34 0.00 1.88 11.06 0.05 76.67 11:51:25 Average: 3 10.56 0.00 1.93 1.69 0.04 85.79 11:51:25 Average: 4 8.45 0.00 1.60 2.91 0.05 86.99 11:51:25 Average: 5 7.78 0.00 1.56 3.11 0.04 87.50 11:51:25 Average: 6 8.15 0.00 1.88 2.56 0.05 87.36 11:51:25 Average: 7 11.70 0.00 1.83 5.35 0.05 81.06 11:51:25 11:51:25 11:51:25
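The host diagnostics dumped at the end of the job (uname, lscpu, nproc, df, free, ip addr and the two sar reports) can be reproduced with a short script along the lines below. This is a sketch, not the job's actual sysstat.sh; it assumes the sysstat package is installed and has been collecting samples for the duration of the run, otherwise sar has nothing to report.

    #!/bin/bash
    # Sketch of the end-of-job host diagnostics capture seen above.
    for cmd in "uname -a" "lscpu" "nproc" "df -h" "free -m" "ip addr"; do
      echo "---> ${cmd}:"
      ${cmd}        # word-splitting on purpose: the string holds command + args
      echo
    done

    # Whole-run I/O, memory and per-interface network statistics.
    echo "---> sar -b -r -n DEV:"
    sar -b -r -n DEV

    # Per-CPU utilisation breakdown for the same interval.
    echo "---> sar -P ALL:"
    sar -P ALL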