23:10:58 Started by timer 23:10:58 Running as SYSTEM 23:10:58 [EnvInject] - Loading node environment variables. 23:10:58 Building remotely on prd-ubuntu1804-docker-8c-8g-22180 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap 23:10:58 [ssh-agent] Looking for ssh-agent implementation... 23:10:58 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine) 23:10:58 $ ssh-agent 23:10:58 SSH_AUTH_SOCK=/tmp/ssh-UQVHXaVCbgTh/agent.2121 23:10:58 SSH_AGENT_PID=2123 23:10:58 [ssh-agent] Started. 23:10:58 Running ssh-add (command line suppressed) 23:10:58 Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_10697456099607887749.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_10697456099607887749.key) 23:10:58 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user) 23:10:58 The recommended git tool is: NONE 23:11:00 using credential onap-jenkins-ssh 23:11:00 Wiping out workspace first. 23:11:00 Cloning the remote Git repository 23:11:00 Cloning repository git://cloud.onap.org/mirror/policy/docker.git 23:11:00 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10 23:11:00 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git 23:11:00 > git --version # timeout=10 23:11:00 > git --version # 'git version 2.17.1' 23:11:00 using GIT_SSH to set credentials Gerrit user 23:11:00 Verifying host key using manually-configured host key entries 23:11:00 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30 23:11:01 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10 23:11:01 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 23:11:01 Avoid second fetch 23:11:01 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10 23:11:01 Checking out Revision c5936fb131831992ac8da40fb56599dfb0ae1b5e (refs/remotes/origin/master) 23:11:01 > git config core.sparsecheckout # timeout=10 23:11:01 > git checkout -f c5936fb131831992ac8da40fb56599dfb0ae1b5e # timeout=30 23:11:01 Commit message: "Disable drools pdp test in CSIT until drools is fixed" 23:11:01 > git rev-list --no-walk c5936fb131831992ac8da40fb56599dfb0ae1b5e # timeout=10 23:11:01 provisioning config files... 
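The checkout above pins the CSIT job to revision c5936fb131831992ac8da40fb56599dfb0ae1b5e of policy/docker ("Disable drools pdp test in CSIT until drools is fixed"). A minimal sketch of reproducing that pinned checkout by hand, using the same mirror URL and revision shown in the log (an illustration only, not the Jenkins Git plugin's exact procedure):

# clone the ONAP policy/docker mirror and pin the revision used by this build
git init policy-pap-master-project-csit-pap
cd policy-pap-master-project-csit-pap
git fetch --tags --progress git://cloud.onap.org/mirror/policy/docker.git '+refs/heads/*:refs/remotes/origin/*'
git checkout -f c5936fb131831992ac8da40fb56599dfb0ae1b5e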
23:11:01 copy managed file [npmrc] to file:/home/jenkins/.npmrc 23:11:01 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf 23:11:01 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins15205227534871770121.sh 23:11:01 ---> python-tools-install.sh 23:11:01 Setup pyenv: 23:11:01 * system (set by /opt/pyenv/version) 23:11:01 * 3.8.13 (set by /opt/pyenv/version) 23:11:01 * 3.9.13 (set by /opt/pyenv/version) 23:11:01 * 3.10.6 (set by /opt/pyenv/version) 23:11:06 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-UZJF 23:11:06 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv 23:11:09 lf-activate-venv(): INFO: Installing: lftools 23:11:42 lf-activate-venv(): INFO: Adding /tmp/venv-UZJF/bin to PATH 23:11:42 Generating Requirements File 23:12:12 Python 3.10.6 23:12:12 pip 24.0 from /tmp/venv-UZJF/lib/python3.10/site-packages/pip (python 3.10) 23:12:12 appdirs==1.4.4 23:12:12 argcomplete==3.2.3 23:12:12 aspy.yaml==1.3.0 23:12:12 attrs==23.2.0 23:12:12 autopage==0.5.2 23:12:12 beautifulsoup4==4.12.3 23:12:12 boto3==1.34.82 23:12:12 botocore==1.34.82 23:12:12 bs4==0.0.2 23:12:12 cachetools==5.3.3 23:12:12 certifi==2024.2.2 23:12:12 cffi==1.16.0 23:12:12 cfgv==3.4.0 23:12:12 chardet==5.2.0 23:12:12 charset-normalizer==3.3.2 23:12:12 click==8.1.7 23:12:12 cliff==4.6.0 23:12:12 cmd2==2.4.3 23:12:12 cryptography==3.3.2 23:12:12 debtcollector==3.0.0 23:12:12 decorator==5.1.1 23:12:12 defusedxml==0.7.1 23:12:12 Deprecated==1.2.14 23:12:12 distlib==0.3.8 23:12:12 dnspython==2.6.1 23:12:12 docker==4.2.2 23:12:12 dogpile.cache==1.3.2 23:12:12 email_validator==2.1.1 23:12:12 filelock==3.13.4 23:12:12 future==1.0.0 23:12:12 gitdb==4.0.11 23:12:12 GitPython==3.1.43 23:12:12 google-auth==2.29.0 23:12:12 httplib2==0.22.0 23:12:12 identify==2.5.35 23:12:12 idna==3.6 23:12:12 importlib-resources==1.5.0 23:12:12 iso8601==2.1.0 23:12:12 Jinja2==3.1.3 23:12:12 jmespath==1.0.1 23:12:12 jsonpatch==1.33 23:12:12 jsonpointer==2.4 23:12:12 jsonschema==4.21.1 23:12:12 jsonschema-specifications==2023.12.1 23:12:12 keystoneauth1==5.6.0 23:12:12 kubernetes==29.0.0 23:12:12 lftools==0.37.10 23:12:12 lxml==5.2.1 23:12:12 MarkupSafe==2.1.5 23:12:12 msgpack==1.0.8 23:12:12 multi_key_dict==2.0.3 23:12:12 munch==4.0.0 23:12:12 netaddr==1.2.1 23:12:12 netifaces==0.11.0 23:12:12 niet==1.4.2 23:12:12 nodeenv==1.8.0 23:12:12 oauth2client==4.1.3 23:12:12 oauthlib==3.2.2 23:12:12 openstacksdk==3.0.0 23:12:12 os-client-config==2.1.0 23:12:12 os-service-types==1.7.0 23:12:12 osc-lib==3.0.1 23:12:12 oslo.config==9.4.0 23:12:12 oslo.context==5.5.0 23:12:12 oslo.i18n==6.3.0 23:12:12 oslo.log==5.5.1 23:12:12 oslo.serialization==5.4.0 23:12:12 oslo.utils==7.1.0 23:12:12 packaging==24.0 23:12:12 pbr==6.0.0 23:12:12 platformdirs==4.2.0 23:12:12 prettytable==3.10.0 23:12:12 pyasn1==0.6.0 23:12:12 pyasn1_modules==0.4.0 23:12:12 pycparser==2.22 23:12:12 pygerrit2==2.0.15 23:12:12 PyGithub==2.3.0 23:12:12 pyinotify==0.9.6 23:12:12 PyJWT==2.8.0 23:12:12 PyNaCl==1.5.0 23:12:12 pyparsing==2.4.7 23:12:12 pyperclip==1.8.2 23:12:12 pyrsistent==0.20.0 23:12:12 python-cinderclient==9.5.0 23:12:12 python-dateutil==2.9.0.post0 23:12:12 python-heatclient==3.5.0 23:12:12 python-jenkins==1.8.2 23:12:12 python-keystoneclient==5.4.0 23:12:12 python-magnumclient==4.4.0 23:12:12 python-novaclient==18.6.0 23:12:12 python-openstackclient==6.6.0 23:12:12 python-swiftclient==4.5.0 23:12:12 PyYAML==6.0.1 23:12:12 referencing==0.34.0 23:12:12 requests==2.31.0 23:12:12 requests-oauthlib==2.0.0 23:12:12 
requestsexceptions==1.4.0 23:12:12 rfc3986==2.0.0 23:12:12 rpds-py==0.18.0 23:12:12 rsa==4.9 23:12:12 ruamel.yaml==0.18.6 23:12:12 ruamel.yaml.clib==0.2.8 23:12:12 s3transfer==0.10.1 23:12:12 simplejson==3.19.2 23:12:12 six==1.16.0 23:12:12 smmap==5.0.1 23:12:12 soupsieve==2.5 23:12:12 stevedore==5.2.0 23:12:12 tabulate==0.9.0 23:12:12 toml==0.10.2 23:12:12 tomlkit==0.12.4 23:12:12 tqdm==4.66.2 23:12:12 typing_extensions==4.11.0 23:12:12 tzdata==2024.1 23:12:12 urllib3==1.26.18 23:12:12 virtualenv==20.25.1 23:12:12 wcwidth==0.2.13 23:12:12 websocket-client==1.7.0 23:12:12 wrapt==1.16.0 23:12:12 xdg==6.0.0 23:12:12 xmltodict==0.13.0 23:12:12 yq==3.2.3 23:12:12 [EnvInject] - Injecting environment variables from a build step. 23:12:12 [EnvInject] - Injecting as environment variables the properties content 23:12:12 SET_JDK_VERSION=openjdk17 23:12:12 GIT_URL="git://cloud.onap.org/mirror" 23:12:12 23:12:12 [EnvInject] - Variables injected successfully. 23:12:12 [policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins13498086042221129545.sh 23:12:12 ---> update-java-alternatives.sh 23:12:12 ---> Updating Java version 23:12:13 ---> Ubuntu/Debian system detected 23:12:13 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode 23:12:13 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode 23:12:13 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode 23:12:13 openjdk version "17.0.4" 2022-07-19 23:12:13 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04) 23:12:13 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing) 23:12:13 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 23:12:13 [EnvInject] - Injecting environment variables from a build step. 23:12:13 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env' 23:12:13 [EnvInject] - Variables injected successfully. 
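Before the CSIT starts, the agent's default JDK is switched to OpenJDK 17 and JAVA_HOME is injected for the following build steps. A rough sketch of the same switch on a Debian/Ubuntu host, assuming the java-17-openjdk-amd64 package is already installed (the update-java-alternatives.sh helper invoked above may do more than this):

# point the java/javac alternatives at OpenJDK 17 and expose JAVA_HOME
sudo update-alternatives --set java /usr/lib/jvm/java-17-openjdk-amd64/bin/java
sudo update-alternatives --set javac /usr/lib/jvm/java-17-openjdk-amd64/bin/javac
export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
java -version   # expect "openjdk version 17.x"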
23:12:13 [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins6387499596892060164.sh 23:12:13 + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap 23:12:13 + set +u 23:12:13 + save_set 23:12:13 + RUN_CSIT_SAVE_SET=ehxB 23:12:13 + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace 23:12:13 + '[' 1 -eq 0 ']' 23:12:13 + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:12:13 + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:13 + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:13 + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts 23:12:13 + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts 23:12:13 + export ROBOT_VARIABLES= 23:12:13 + ROBOT_VARIABLES= 23:12:13 + export PROJECT=pap 23:12:13 + PROJECT=pap 23:12:13 + cd /w/workspace/policy-pap-master-project-csit-pap 23:12:13 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 23:12:13 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 23:12:13 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh 23:12:13 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']' 23:12:13 + relax_set 23:12:13 + set +e 23:12:13 + set +o pipefail 23:12:13 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh 23:12:13 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:12:13 +++ mktemp -d 23:12:13 ++ ROBOT_VENV=/tmp/tmp.moxBuK4IrZ 23:12:13 ++ echo ROBOT_VENV=/tmp/tmp.moxBuK4IrZ 23:12:13 +++ python3 --version 23:12:13 ++ echo 'Python version is: Python 3.6.9' 23:12:13 Python version is: Python 3.6.9 23:12:13 ++ python3 -m venv --clear /tmp/tmp.moxBuK4IrZ 23:12:14 ++ source /tmp/tmp.moxBuK4IrZ/bin/activate 23:12:14 +++ deactivate nondestructive 23:12:14 +++ '[' -n '' ']' 23:12:14 +++ '[' -n '' ']' 23:12:14 +++ '[' -n /bin/bash -o -n '' ']' 23:12:14 +++ hash -r 23:12:14 +++ '[' -n '' ']' 23:12:14 +++ unset VIRTUAL_ENV 23:12:14 +++ '[' '!' 
nondestructive = nondestructive ']' 23:12:14 +++ VIRTUAL_ENV=/tmp/tmp.moxBuK4IrZ 23:12:14 +++ export VIRTUAL_ENV 23:12:14 +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:14 +++ PATH=/tmp/tmp.moxBuK4IrZ/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:14 +++ export PATH 23:12:14 +++ '[' -n '' ']' 23:12:14 +++ '[' -z '' ']' 23:12:14 +++ _OLD_VIRTUAL_PS1= 23:12:14 +++ '[' 'x(tmp.moxBuK4IrZ) ' '!=' x ']' 23:12:14 +++ PS1='(tmp.moxBuK4IrZ) ' 23:12:14 +++ export PS1 23:12:14 +++ '[' -n /bin/bash -o -n '' ']' 23:12:14 +++ hash -r 23:12:14 ++ set -exu 23:12:14 ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1' 23:12:18 ++ echo 'Installing Python Requirements' 23:12:18 Installing Python Requirements 23:12:18 ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt 23:12:36 ++ python3 -m pip -qq freeze 23:12:36 bcrypt==4.0.1 23:12:36 beautifulsoup4==4.12.3 23:12:36 bitarray==2.9.2 23:12:36 certifi==2024.2.2 23:12:36 cffi==1.15.1 23:12:36 charset-normalizer==2.0.12 23:12:36 cryptography==40.0.2 23:12:36 decorator==5.1.1 23:12:36 elasticsearch==7.17.9 23:12:36 elasticsearch-dsl==7.4.1 23:12:36 enum34==1.1.10 23:12:36 idna==3.6 23:12:36 importlib-resources==5.4.0 23:12:36 ipaddr==2.2.0 23:12:36 isodate==0.6.1 23:12:36 jmespath==0.10.0 23:12:36 jsonpatch==1.32 23:12:36 jsonpath-rw==1.4.0 23:12:36 jsonpointer==2.3 23:12:36 lxml==5.2.1 23:12:36 netaddr==0.8.0 23:12:36 netifaces==0.11.0 23:12:36 odltools==0.1.28 23:12:36 paramiko==3.4.0 23:12:36 pkg_resources==0.0.0 23:12:36 ply==3.11 23:12:36 pyang==2.6.0 23:12:36 pyangbind==0.8.1 23:12:36 pycparser==2.21 23:12:36 pyhocon==0.3.60 23:12:36 PyNaCl==1.5.0 23:12:36 pyparsing==3.1.2 23:12:36 python-dateutil==2.9.0.post0 23:12:36 regex==2023.8.8 23:12:36 requests==2.27.1 23:12:36 robotframework==6.1.1 23:12:36 robotframework-httplibrary==0.4.2 23:12:36 robotframework-pythonlibcore==3.0.0 23:12:36 robotframework-requests==0.9.4 23:12:36 robotframework-selenium2library==3.0.0 23:12:36 robotframework-seleniumlibrary==5.1.3 23:12:36 robotframework-sshlibrary==3.8.0 23:12:36 scapy==2.5.0 23:12:36 scp==0.14.5 23:12:36 selenium==3.141.0 23:12:36 six==1.16.0 23:12:36 soupsieve==2.3.2.post1 23:12:36 urllib3==1.26.18 23:12:36 waitress==2.0.0 23:12:36 WebOb==1.8.7 23:12:36 WebTest==3.0.0 23:12:36 zipp==3.6.0 23:12:36 ++ mkdir -p /tmp/tmp.moxBuK4IrZ/src/onap 23:12:36 ++ rm -rf /tmp/tmp.moxBuK4IrZ/src/onap/testsuite 23:12:36 ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre 23:12:41 ++ echo 'Installing python confluent-kafka library' 23:12:41 Installing python confluent-kafka library 23:12:41 ++ python3 -m pip install -qq confluent-kafka 23:12:43 ++ echo 'Uninstall docker-py and reinstall docker.' 23:12:43 Uninstall docker-py and reinstall docker. 
23:12:43 ++ python3 -m pip uninstall -y -qq docker 23:12:43 ++ python3 -m pip install -U -qq docker 23:12:44 ++ python3 -m pip -qq freeze 23:12:44 bcrypt==4.0.1 23:12:44 beautifulsoup4==4.12.3 23:12:44 bitarray==2.9.2 23:12:44 certifi==2024.2.2 23:12:44 cffi==1.15.1 23:12:44 charset-normalizer==2.0.12 23:12:44 confluent-kafka==2.3.0 23:12:44 cryptography==40.0.2 23:12:44 decorator==5.1.1 23:12:44 deepdiff==5.7.0 23:12:44 dnspython==2.2.1 23:12:44 docker==5.0.3 23:12:44 elasticsearch==7.17.9 23:12:44 elasticsearch-dsl==7.4.1 23:12:44 enum34==1.1.10 23:12:44 future==1.0.0 23:12:44 idna==3.6 23:12:44 importlib-resources==5.4.0 23:12:44 ipaddr==2.2.0 23:12:44 isodate==0.6.1 23:12:44 Jinja2==3.0.3 23:12:44 jmespath==0.10.0 23:12:44 jsonpatch==1.32 23:12:44 jsonpath-rw==1.4.0 23:12:44 jsonpointer==2.3 23:12:44 kafka-python==2.0.2 23:12:44 lxml==5.2.1 23:12:44 MarkupSafe==2.0.1 23:12:44 more-itertools==5.0.0 23:12:44 netaddr==0.8.0 23:12:44 netifaces==0.11.0 23:12:44 odltools==0.1.28 23:12:44 ordered-set==4.0.2 23:12:44 paramiko==3.4.0 23:12:44 pbr==6.0.0 23:12:44 pkg_resources==0.0.0 23:12:44 ply==3.11 23:12:44 protobuf==3.19.6 23:12:44 pyang==2.6.0 23:12:44 pyangbind==0.8.1 23:12:44 pycparser==2.21 23:12:44 pyhocon==0.3.60 23:12:44 PyNaCl==1.5.0 23:12:44 pyparsing==3.1.2 23:12:44 python-dateutil==2.9.0.post0 23:12:44 PyYAML==6.0.1 23:12:44 regex==2023.8.8 23:12:44 requests==2.27.1 23:12:44 robotframework==6.1.1 23:12:44 robotframework-httplibrary==0.4.2 23:12:44 robotframework-onap==0.6.0.dev105 23:12:44 robotframework-pythonlibcore==3.0.0 23:12:44 robotframework-requests==0.9.4 23:12:44 robotframework-selenium2library==3.0.0 23:12:44 robotframework-seleniumlibrary==5.1.3 23:12:44 robotframework-sshlibrary==3.8.0 23:12:44 robotlibcore-temp==1.0.2 23:12:44 scapy==2.5.0 23:12:44 scp==0.14.5 23:12:44 selenium==3.141.0 23:12:44 six==1.16.0 23:12:44 soupsieve==2.3.2.post1 23:12:44 urllib3==1.26.18 23:12:44 waitress==2.0.0 23:12:44 WebOb==1.8.7 23:12:44 websocket-client==1.3.1 23:12:44 WebTest==3.0.0 23:12:44 zipp==3.6.0 23:12:44 ++ uname 23:12:44 ++ grep -q Linux 23:12:44 ++ sudo apt-get -y -qq install libxml2-utils 23:12:45 + load_set 23:12:45 + _setopts=ehuxB 23:12:45 ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace 23:12:45 ++ tr : ' ' 23:12:45 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:45 + set +o braceexpand 23:12:45 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:45 + set +o hashall 23:12:45 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:45 + set +o interactive-comments 23:12:45 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:45 + set +o nounset 23:12:45 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:45 + set +o xtrace 23:12:45 ++ echo ehuxB 23:12:45 ++ sed 's/./& /g' 23:12:45 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:45 + set +e 23:12:45 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:45 + set +h 23:12:45 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:45 + set +u 23:12:45 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:45 + set +x 23:12:45 + source_safely /tmp/tmp.moxBuK4IrZ/bin/activate 23:12:45 + '[' -z /tmp/tmp.moxBuK4IrZ/bin/activate ']' 23:12:45 + relax_set 23:12:45 + set +e 23:12:45 + set +o pipefail 23:12:45 + . 
/tmp/tmp.moxBuK4IrZ/bin/activate 23:12:45 ++ deactivate nondestructive 23:12:45 ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']' 23:12:45 ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:45 ++ export PATH 23:12:45 ++ unset _OLD_VIRTUAL_PATH 23:12:45 ++ '[' -n '' ']' 23:12:45 ++ '[' -n /bin/bash -o -n '' ']' 23:12:45 ++ hash -r 23:12:45 ++ '[' -n '' ']' 23:12:45 ++ unset VIRTUAL_ENV 23:12:45 ++ '[' '!' nondestructive = nondestructive ']' 23:12:45 ++ VIRTUAL_ENV=/tmp/tmp.moxBuK4IrZ 23:12:45 ++ export VIRTUAL_ENV 23:12:45 ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:45 ++ PATH=/tmp/tmp.moxBuK4IrZ/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:45 ++ export PATH 23:12:45 ++ '[' -n '' ']' 23:12:45 ++ '[' -z '' ']' 23:12:45 ++ _OLD_VIRTUAL_PS1='(tmp.moxBuK4IrZ) ' 23:12:45 ++ '[' 'x(tmp.moxBuK4IrZ) ' '!=' x ']' 23:12:45 ++ PS1='(tmp.moxBuK4IrZ) (tmp.moxBuK4IrZ) ' 23:12:45 ++ export PS1 23:12:45 ++ '[' -n /bin/bash -o -n '' ']' 23:12:45 ++ hash -r 23:12:45 + load_set 23:12:45 + _setopts=hxB 23:12:45 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:12:45 ++ tr : ' ' 23:12:45 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:45 + set +o braceexpand 23:12:45 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:45 + set +o hashall 23:12:45 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:45 + set +o interactive-comments 23:12:45 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:45 + set +o xtrace 23:12:45 ++ echo hxB 23:12:45 ++ sed 's/./& /g' 23:12:45 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:45 + set +h 23:12:45 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:45 + set +x 23:12:45 + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests 23:12:45 + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests 23:12:45 + export TEST_OPTIONS= 23:12:45 + TEST_OPTIONS= 23:12:45 ++ mktemp -d 23:12:45 + WORKDIR=/tmp/tmp.3MdzHR9SiW 23:12:45 + cd /tmp/tmp.3MdzHR9SiW 23:12:45 + docker login -u docker -p docker nexus3.onap.org:10001 23:12:45 WARNING! Using --password via the CLI is insecure. Use --password-stdin. 23:12:45 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json. 23:12:45 Configure a credential helper to remove this warning. 
See 23:12:45 https://docs.docker.com/engine/reference/commandline/login/#credentials-store 23:12:45 23:12:45 Login Succeeded 23:12:45 + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:12:45 + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' 23:12:45 + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh' 23:12:45 Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:12:45 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:12:45 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' 23:12:45 + relax_set 23:12:45 + set +e 23:12:45 + set +o pipefail 23:12:45 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:12:45 ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh 23:12:45 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:12:45 ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview 23:12:45 +++ GERRIT_BRANCH=master 23:12:45 +++ echo GERRIT_BRANCH=master 23:12:45 GERRIT_BRANCH=master 23:12:45 +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models 23:12:45 +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models 23:12:45 +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models 23:12:45 Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'... 23:12:48 +++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies 23:12:48 +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies 23:12:48 +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 23:12:48 +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 23:12:48 +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json 23:12:48 +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json 23:12:48 ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana 23:12:48 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:12:48 +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose 23:12:48 +++ grafana=false 23:12:48 +++ gui=false 23:12:48 +++ [[ 2 -gt 0 ]] 23:12:48 +++ key=apex-pdp 23:12:48 +++ case $key in 23:12:48 +++ echo apex-pdp 23:12:48 apex-pdp 23:12:48 +++ component=apex-pdp 23:12:48 +++ shift 23:12:48 +++ [[ 1 -gt 0 ]] 23:12:48 +++ key=--grafana 23:12:48 +++ case $key in 23:12:48 +++ grafana=true 23:12:48 +++ shift 23:12:48 +++ [[ 0 -gt 0 ]] 23:12:48 +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose 23:12:48 +++ echo 'Configuring docker compose...' 23:12:48 Configuring docker compose... 
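As the trace shows, start-compose.sh walks its arguments to pick the PDP component (apex-pdp here) and optional flags such as --grafana before bringing the stack up. A simplified sketch of that flag-parsing pattern, assuming the same compose service names as in this job (illustrative only, not the script's full logic):

# parse "<component> [--grafana] [--gui]" style arguments, then start the stack
component=""
grafana=false
gui=false
while [[ $# -gt 0 ]]; do
  key="$1"
  case $key in
    --grafana) grafana=true ;;
    --gui)     gui=true ;;
    *)         component="$key" ;;
  esac
  shift
done
extra=""
if [ "$grafana" = true ]; then extra="grafana"; fi
docker-compose up -d "$component" $extra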
23:12:48 +++ source export-ports.sh 23:12:48 +++ source get-versions.sh 23:12:51 +++ '[' -z pap ']' 23:12:51 +++ '[' -n apex-pdp ']' 23:12:51 +++ '[' apex-pdp == logs ']' 23:12:51 +++ '[' true = true ']' 23:12:51 +++ echo 'Starting apex-pdp application with Grafana' 23:12:51 Starting apex-pdp application with Grafana 23:12:51 +++ docker-compose up -d apex-pdp grafana 23:12:52 Creating network "compose_default" with the default driver 23:12:52 Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)... 23:12:52 latest: Pulling from prom/prometheus 23:12:55 Digest: sha256:dec2018ae55885fed717f25c289b8c9cff0bf5fbb9e619fb49b6161ac493c016 23:12:55 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest 23:12:55 Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)... 23:12:55 latest: Pulling from grafana/grafana 23:13:00 Digest: sha256:753bbb971071480d6630d3aa0d55345188c02f39456664f67c1ea443593638d0 23:13:00 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest 23:13:00 Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)... 23:13:00 10.10.2: Pulling from mariadb 23:13:06 Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e 23:13:06 Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2 23:13:06 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1)... 23:13:06 3.1.1: Pulling from onap/policy-models-simulator 23:13:10 Digest: sha256:a22fada6cc93fcd88ed863d58b0b428eaaf13d3b02579e649141f6bdb5fac181 23:13:10 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.1 23:13:10 Pulling zookeeper (confluentinc/cp-zookeeper:latest)... 23:13:10 latest: Pulling from confluentinc/cp-zookeeper 23:13:23 Digest: sha256:9babd1c0beaf93189982bdbb9fe4bf194a2730298b640c057817746c19838866 23:13:23 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest 23:13:23 Pulling kafka (confluentinc/cp-kafka:latest)... 23:13:23 latest: Pulling from confluentinc/cp-kafka 23:13:27 Digest: sha256:24cdd3a7fa89d2bed150560ebea81ff1943badfa61e51d66bb541a6b0d7fb047 23:13:27 Status: Downloaded newer image for confluentinc/cp-kafka:latest 23:13:27 Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)... 23:13:27 3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator 23:13:35 Digest: sha256:60a680475999b7df727a4e4ae6dd0391d3a6f4fffbde0f8c3faea985c8443c48 23:13:35 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT 23:13:35 Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.1)... 23:13:35 3.1.1: Pulling from onap/policy-api 23:13:37 Digest: sha256:73823c235d74d2500efd44b527f0e010b15469552561a2052fab717e6646a352 23:13:37 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.1 23:13:37 Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.1)... 23:13:37 3.1.1: Pulling from onap/policy-pap 23:13:38 Digest: sha256:2271905a2e80443fc6baa2f2141445192fe325d5c557920b1f4880541288e18d 23:13:38 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.1 23:13:38 Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)... 23:13:39 3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp 23:13:45 Digest: sha256:3f9880e060c3465862043c69561fa1d43ab448175d1adf3efd53d751d3b9947d 23:13:45 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT 23:13:46 Creating mariadb ... 23:13:46 Creating simulator ... 
23:13:46 Creating prometheus ... 23:13:46 Creating compose_zookeeper_1 ... 23:13:57 Creating compose_zookeeper_1 ... done 23:13:57 Creating kafka ... 23:13:58 Creating kafka ... done 23:13:59 Creating mariadb ... done 23:13:59 Creating policy-db-migrator ... 23:14:00 Creating policy-db-migrator ... done 23:14:00 Creating policy-api ... 23:14:01 Creating policy-api ... done 23:14:01 Creating policy-pap ... 23:14:02 Creating policy-pap ... done 23:14:02 Creating simulator ... done 23:14:02 Creating policy-apex-pdp ... 23:14:03 Creating prometheus ... done 23:14:03 Creating grafana ... 23:14:04 Creating grafana ... done 23:14:06 Creating policy-apex-pdp ... done 23:14:06 +++ echo 'Prometheus server: http://localhost:30259' 23:14:06 Prometheus server: http://localhost:30259 23:14:06 +++ echo 'Grafana server: http://localhost:30269' 23:14:06 Grafana server: http://localhost:30269 23:14:06 +++ cd /w/workspace/policy-pap-master-project-csit-pap 23:14:06 ++ sleep 10 23:14:16 ++ unset http_proxy https_proxy 23:14:16 ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003 23:14:16 Waiting for REST to come up on localhost port 30003... 23:14:16 NAMES STATUS 23:14:16 grafana Up 11 seconds 23:14:16 policy-apex-pdp Up 10 seconds 23:14:16 policy-pap Up 13 seconds 23:14:16 policy-api Up 14 seconds 23:14:16 kafka Up 17 seconds 23:14:16 compose_zookeeper_1 Up 18 seconds 23:14:16 mariadb Up 16 seconds 23:14:16 simulator Up 13 seconds 23:14:16 prometheus Up 12 seconds 23:14:21 NAMES STATUS 23:14:21 grafana Up 16 seconds 23:14:21 policy-apex-pdp Up 15 seconds 23:14:21 policy-pap Up 18 seconds 23:14:21 policy-api Up 19 seconds 23:14:21 kafka Up 22 seconds 23:14:21 compose_zookeeper_1 Up 23 seconds 23:14:21 mariadb Up 21 seconds 23:14:21 simulator Up 18 seconds 23:14:21 prometheus Up 17 seconds 23:14:26 NAMES STATUS 23:14:26 grafana Up 21 seconds 23:14:26 policy-apex-pdp Up 20 seconds 23:14:26 policy-pap Up 23 seconds 23:14:26 policy-api Up 24 seconds 23:14:26 kafka Up 27 seconds 23:14:26 compose_zookeeper_1 Up 28 seconds 23:14:26 mariadb Up 26 seconds 23:14:26 simulator Up 23 seconds 23:14:26 prometheus Up 22 seconds 23:14:31 NAMES STATUS 23:14:31 grafana Up 26 seconds 23:14:31 policy-apex-pdp Up 25 seconds 23:14:31 policy-pap Up 28 seconds 23:14:31 policy-api Up 29 seconds 23:14:31 kafka Up 32 seconds 23:14:31 compose_zookeeper_1 Up 33 seconds 23:14:31 mariadb Up 31 seconds 23:14:31 simulator Up 28 seconds 23:14:31 prometheus Up 27 seconds 23:14:36 NAMES STATUS 23:14:36 grafana Up 31 seconds 23:14:36 policy-apex-pdp Up 30 seconds 23:14:36 policy-pap Up 33 seconds 23:14:36 policy-api Up 34 seconds 23:14:36 kafka Up 37 seconds 23:14:36 compose_zookeeper_1 Up 38 seconds 23:14:36 mariadb Up 36 seconds 23:14:36 simulator Up 33 seconds 23:14:36 prometheus Up 32 seconds 23:14:41 NAMES STATUS 23:14:41 grafana Up 36 seconds 23:14:41 policy-apex-pdp Up 35 seconds 23:14:41 policy-pap Up 38 seconds 23:14:41 policy-api Up 39 seconds 23:14:41 kafka Up 42 seconds 23:14:41 compose_zookeeper_1 Up 43 seconds 23:14:41 mariadb Up 41 seconds 23:14:41 simulator Up 38 seconds 23:14:41 prometheus Up 37 seconds 23:14:41 ++ export 'SUITES=pap-test.robot 23:14:41 pap-slas.robot' 23:14:41 ++ SUITES='pap-test.robot 23:14:41 pap-slas.robot' 23:14:41 ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 23:14:41 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v 
NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' 23:14:41 + load_set 23:14:41 + _setopts=hxB 23:14:41 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:14:41 ++ tr : ' ' 23:14:41 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:14:41 + set +o braceexpand 23:14:41 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:14:41 + set +o hashall 23:14:41 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:14:41 + set +o interactive-comments 23:14:41 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:14:41 + set +o xtrace 23:14:41 ++ echo hxB 23:14:41 ++ sed 's/./& /g' 23:14:41 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:14:41 + set +h 23:14:41 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:14:41 + set +x 23:14:41 + docker_stats 23:14:41 + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt 23:14:41 ++ uname -s 23:14:41 + '[' Linux == Darwin ']' 23:14:41 + sh -c 'top -bn1 | head -3' 23:14:41 top - 23:14:41 up 4 min, 0 users, load average: 3.50, 1.47, 0.57 23:14:41 Tasks: 210 total, 1 running, 131 sleeping, 0 stopped, 0 zombie 23:14:41 %Cpu(s): 14.3 us, 3.1 sy, 0.0 ni, 79.0 id, 3.4 wa, 0.0 hi, 0.1 si, 0.1 st 23:14:41 + echo 23:14:41 + sh -c 'free -h' 23:14:41 23:14:41 total used free shared buff/cache available 23:14:41 Mem: 31G 2.8G 22G 1.3M 6.2G 28G 23:14:41 Swap: 1.0G 0B 1.0G 23:14:41 + echo 23:14:41 + docker ps --format 'table {{ .Names }}\t{{ .Status }}' 23:14:41 23:14:41 NAMES STATUS 23:14:41 grafana Up 36 seconds 23:14:41 policy-apex-pdp Up 35 seconds 23:14:41 policy-pap Up 39 seconds 23:14:41 policy-api Up 40 seconds 23:14:41 kafka Up 43 seconds 23:14:41 compose_zookeeper_1 Up 43 seconds 23:14:41 mariadb Up 41 seconds 23:14:41 simulator Up 38 seconds 23:14:41 prometheus Up 37 seconds 23:14:41 + echo 23:14:41 + docker stats --no-stream 23:14:41 23:14:44 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 23:14:44 b586b9c35606 grafana 0.04% 56.89MiB / 31.41GiB 0.18% 18.5kB / 3.44kB 0B / 24.9MB 14 23:14:44 a09374801951 policy-apex-pdp 1.12% 198.1MiB / 31.41GiB 0.62% 7.43kB / 7.05kB 0B / 0B 48 23:14:44 9d9e09b72e01 policy-pap 2.14% 619.2MiB / 31.41GiB 1.93% 30.2kB / 59.4kB 0B / 153MB 63 23:14:44 20125a8e8da8 policy-api 0.12% 569.8MiB / 31.41GiB 1.77% 1MB / 737kB 0B / 0B 56 23:14:44 5e0df65b7d20 kafka 0.70% 376.8MiB / 31.41GiB 1.17% 70.7kB / 73.3kB 0B / 475kB 83 23:14:44 ef47b8430324 compose_zookeeper_1 0.17% 98.51MiB / 31.41GiB 0.31% 57.2kB / 50.6kB 229kB / 393kB 60 23:14:44 ca41fa0aa316 mariadb 0.02% 102.1MiB / 31.41GiB 0.32% 997kB / 1.19MB 11MB / 61.3MB 39 23:14:44 0b36c069a92c simulator 0.07% 123.2MiB / 31.41GiB 0.38% 1.19kB / 0B 0B / 0B 76 23:14:44 64944c81b3ef prometheus 0.24% 18.75MiB / 31.41GiB 0.06% 55.5kB / 1.87kB 0B / 0B 10 23:14:44 + echo 23:14:44 23:14:44 + cd /tmp/tmp.3MdzHR9SiW 23:14:44 + echo 'Reading the testplan:' 23:14:44 Reading the testplan: 23:14:44 + echo 'pap-test.robot 23:14:44 pap-slas.robot' 23:14:44 + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' 23:14:44 + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|' 23:14:44 + cat testplan.txt 23:14:44 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot 23:14:44 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot 23:14:44 ++ xargs 23:14:44 + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot 
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot' 23:14:44 + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 23:14:44 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' 23:14:44 ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 23:14:44 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 23:14:44 + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...' 23:14:44 Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ... 23:14:44 + relax_set 23:14:44 + set +e 23:14:44 + set +o pipefail 23:14:44 + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot 23:14:44 ============================================================================== 23:14:44 pap 23:14:44 ============================================================================== 23:14:44 pap.Pap-Test 23:14:44 ============================================================================== 23:14:45 LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS | 23:14:45 ------------------------------------------------------------------------------ 23:14:45 LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS | 23:14:45 ------------------------------------------------------------------------------ 23:14:46 LoadNodeTemplates :: Create node templates in database using speci... | PASS | 23:14:46 ------------------------------------------------------------------------------ 23:14:46 Healthcheck :: Verify policy pap health check | PASS | 23:14:46 ------------------------------------------------------------------------------ 23:15:07 Consolidated Healthcheck :: Verify policy consolidated health check | PASS | 23:15:07 ------------------------------------------------------------------------------ 23:15:07 Metrics :: Verify policy pap is exporting prometheus metrics | PASS | 23:15:07 ------------------------------------------------------------------------------ 23:15:07 AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... 
| PASS | 23:15:07 ------------------------------------------------------------------------------ 23:15:07 QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS | 23:15:07 ------------------------------------------------------------------------------ 23:15:08 ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS | 23:15:08 ------------------------------------------------------------------------------ 23:15:08 QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS | 23:15:08 ------------------------------------------------------------------------------ 23:15:08 DeployPdpGroups :: Deploy policies in PdpGroups | PASS | 23:15:08 ------------------------------------------------------------------------------ 23:15:08 QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS | 23:15:08 ------------------------------------------------------------------------------ 23:15:09 QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS | 23:15:09 ------------------------------------------------------------------------------ 23:15:09 QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS | 23:15:09 ------------------------------------------------------------------------------ 23:15:09 UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS | 23:15:09 ------------------------------------------------------------------------------ 23:15:09 UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS | 23:15:09 ------------------------------------------------------------------------------ 23:15:10 QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS | 23:15:10 ------------------------------------------------------------------------------ 23:15:30 QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS | 23:15:30 ------------------------------------------------------------------------------ 23:15:30 QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS | 23:15:30 ------------------------------------------------------------------------------ 23:15:30 DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS | 23:15:30 ------------------------------------------------------------------------------ 23:15:30 DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS | 23:15:30 ------------------------------------------------------------------------------ 23:15:30 QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS | 23:15:30 ------------------------------------------------------------------------------ 23:15:30 pap.Pap-Test | PASS | 23:15:30 22 tests, 22 passed, 0 failed 23:15:30 ============================================================================== 23:15:30 pap.Pap-Slas 23:15:30 ============================================================================== 23:16:30 WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS | 23:16:30 ------------------------------------------------------------------------------ 23:16:30 ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS | 23:16:30 ------------------------------------------------------------------------------ 23:16:30 ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... 
| PASS | 23:16:30 ------------------------------------------------------------------------------ 23:16:30 ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS | 23:16:30 ------------------------------------------------------------------------------ 23:16:31 ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS | 23:16:31 ------------------------------------------------------------------------------ 23:16:31 ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS | 23:16:31 ------------------------------------------------------------------------------ 23:16:31 ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS | 23:16:31 ------------------------------------------------------------------------------ 23:16:31 ValidateResponseTimeDeleteGroup :: Validate delete group response ... | PASS | 23:16:31 ------------------------------------------------------------------------------ 23:16:31 pap.Pap-Slas | PASS | 23:16:31 8 tests, 8 passed, 0 failed 23:16:31 ============================================================================== 23:16:31 pap | PASS | 23:16:31 30 tests, 30 passed, 0 failed 23:16:31 ============================================================================== 23:16:31 Output: /tmp/tmp.3MdzHR9SiW/output.xml 23:16:31 Log: /tmp/tmp.3MdzHR9SiW/log.html 23:16:31 Report: /tmp/tmp.3MdzHR9SiW/report.html 23:16:31 + RESULT=0 23:16:31 + load_set 23:16:31 + _setopts=hxB 23:16:31 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:16:31 ++ tr : ' ' 23:16:31 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:16:31 + set +o braceexpand 23:16:31 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:16:31 + set +o hashall 23:16:31 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:16:31 + set +o interactive-comments 23:16:31 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:16:31 + set +o xtrace 23:16:31 ++ echo hxB 23:16:31 ++ sed 's/./& /g' 23:16:31 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:16:31 + set +h 23:16:31 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:16:31 + set +x 23:16:31 + echo 'RESULT: 0' 23:16:31 RESULT: 0 23:16:31 + exit 0 23:16:31 + on_exit 23:16:31 + rc=0 23:16:31 + [[ -n /w/workspace/policy-pap-master-project-csit-pap ]] 23:16:31 + docker ps --format 'table {{ .Names }}\t{{ .Status }}' 23:16:31 NAMES STATUS 23:16:31 grafana Up 2 minutes 23:16:31 policy-apex-pdp Up 2 minutes 23:16:31 policy-pap Up 2 minutes 23:16:31 policy-api Up 2 minutes 23:16:31 kafka Up 2 minutes 23:16:31 compose_zookeeper_1 Up 2 minutes 23:16:31 mariadb Up 2 minutes 23:16:31 simulator Up 2 minutes 23:16:31 prometheus Up 2 minutes 23:16:31 + docker_stats 23:16:31 ++ uname -s 23:16:31 + '[' Linux == Darwin ']' 23:16:31 + sh -c 'top -bn1 | head -3' 23:16:31 top - 23:16:31 up 6 min, 0 users, load average: 0.68, 1.08, 0.53 23:16:31 Tasks: 201 total, 1 running, 129 sleeping, 0 stopped, 0 zombie 23:16:31 %Cpu(s): 11.5 us, 2.3 sy, 0.0 ni, 83.4 id, 2.6 wa, 0.0 hi, 0.1 si, 0.1 st 23:16:31 + echo 23:16:31 23:16:31 + sh -c 'free -h' 23:16:31 total used free shared buff/cache available 23:16:31 Mem: 31G 2.9G 22G 1.3M 6.2G 28G 23:16:31 Swap: 1.0G 0B 1.0G 23:16:31 + echo 23:16:31 23:16:31 + docker ps --format 'table {{ .Names }}\t{{ .Status }}' 23:16:31 NAMES STATUS 23:16:31 grafana Up 2 minutes 23:16:31 policy-apex-pdp Up 2 minutes 23:16:31 policy-pap Up 2 minutes 23:16:31 policy-api Up 2 minutes 23:16:31 kafka Up 2 minutes 23:16:31 compose_zookeeper_1 Up 2 minutes 23:16:31 mariadb Up 2 minutes 23:16:31 simulator 
Up 2 minutes 23:16:31 prometheus Up 2 minutes 23:16:31 + echo 23:16:31 23:16:31 + docker stats --no-stream 23:16:34 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 23:16:34 b586b9c35606 grafana 0.05% 63.32MiB / 31.41GiB 0.20% 19.4kB / 4.52kB 0B / 24.9MB 14 23:16:34 a09374801951 policy-apex-pdp 0.73% 186.9MiB / 31.41GiB 0.58% 57kB / 91.4kB 0B / 0B 52 23:16:34 9d9e09b72e01 policy-pap 0.59% 640.1MiB / 31.41GiB 1.99% 2.33MB / 800kB 0B / 153MB 67 23:16:34 20125a8e8da8 policy-api 0.11% 571.6MiB / 31.41GiB 1.78% 2.49MB / 1.26MB 0B / 0B 58 23:16:34 5e0df65b7d20 kafka 9.85% 387.1MiB / 31.41GiB 1.20% 241kB / 216kB 0B / 573kB 85 23:16:34 ef47b8430324 compose_zookeeper_1 0.05% 98.67MiB / 31.41GiB 0.31% 60.1kB / 52.1kB 229kB / 393kB 60 23:16:34 ca41fa0aa316 mariadb 0.01% 103.5MiB / 31.41GiB 0.32% 1.95MB / 4.77MB 11MB / 61.6MB 28 23:16:34 0b36c069a92c simulator 0.12% 123.4MiB / 31.41GiB 0.38% 1.45kB / 0B 0B / 0B 78 23:16:34 64944c81b3ef prometheus 0.00% 24MiB / 31.41GiB 0.07% 166kB / 11kB 0B / 0B 10 23:16:34 + echo 23:16:34 23:16:34 + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh 23:16:34 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']' 23:16:34 + relax_set 23:16:34 + set +e 23:16:34 + set +o pipefail 23:16:34 + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh 23:16:34 ++ echo 'Shut down started!' 23:16:34 Shut down started! 23:16:34 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:16:34 ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose 23:16:34 ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose 23:16:34 ++ source export-ports.sh 23:16:34 ++ source get-versions.sh 23:16:36 ++ echo 'Collecting logs from docker compose containers...' 23:16:36 Collecting logs from docker compose containers... 23:16:36 ++ docker-compose logs 23:16:37 ++ cat docker_compose.log 23:16:37 Attaching to grafana, policy-apex-pdp, policy-pap, policy-api, policy-db-migrator, kafka, compose_zookeeper_1, mariadb, simulator, prometheus 23:16:37 zookeeper_1 | ===> User 23:16:37 zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 23:16:37 zookeeper_1 | ===> Configuring ... 23:16:37 zookeeper_1 | ===> Running preflight checks ... 23:16:37 zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ... 23:16:37 zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ... 23:16:37 zookeeper_1 | ===> Launching ... 23:16:37 zookeeper_1 | ===> Launching zookeeper ... 
23:16:37 zookeeper_1 | [2024-04-10 23:14:01,340] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,348] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,348] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,348] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,348] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,350] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,350] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,350] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,350] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,351] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,351] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,352] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,352] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,352] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,352] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,352] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,363] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@26275bef (org.apache.zookeeper.server.ServerMetrics) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,366] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,366] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,369] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,379] INFO (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,379] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,379] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,379] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) 
23:16:37 zookeeper_1 | [2024-04-10 23:14:01,379] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,379] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,379] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,379] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,379] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,379] INFO (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,380] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,381] INFO Server environment:host.name=ef47b8430324 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,381] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,381] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,381] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,381] INFO Server environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/
java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/jav
a/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,381] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,381] INFO Server environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,381] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,381] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,381] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,381] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,381] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,381] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,381] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,381] INFO Server environment:os.memory.free=490MB (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,381] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,381] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,381] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,381] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,381] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,381] 
INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,381] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,381] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,381] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,382] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,383] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,383] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,384] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,384] INFO getChildren response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,385] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,385] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,385] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,385] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,385] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,385] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,388] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,388] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,388] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,388] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,388] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,408] INFO Logging initialized @553ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,497] WARN o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,497] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,517] INFO jetty-9.4.53.v20231009; built: 
2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,545] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,545] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,546] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,549] WARN ServletContext@o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,557] INFO Started o.e.j.s.ServletContextHandler@5be1d0a4{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,572] INFO Started ServerConnector@4f32a3ad{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,572] INFO Started @717ms (org.eclipse.jetty.server.Server) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,572] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,580] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,581] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,583] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 
(org.apache.zookeeper.server.NIOServerCnxnFactory) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,585] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,609] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,609] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,611] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,611] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,619] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,619] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,627] INFO Snapshot loaded in 16 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,628] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,629] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,640] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,640] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,658] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) 23:16:37 zookeeper_1 | [2024-04-10 23:14:01,659] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) 23:16:37 zookeeper_1 | [2024-04-10 23:14:03,071] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) 23:16:37 kafka | ===> User 23:16:37 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 23:16:37 kafka | ===> Configuring ... 23:16:37 kafka | Running in Zookeeper mode... 23:16:37 kafka | ===> Running preflight checks ... 23:16:37 kafka | ===> Check if /var/lib/kafka/data is writable ... 23:16:37 kafka | ===> Check if Zookeeper is healthy ... 23:16:37 kafka | SLF4J: Class path contains multiple SLF4J bindings. 23:16:37 kafka | SLF4J: Found binding in [jar:file:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] 23:16:37 kafka | SLF4J: Found binding in [jar:file:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] 23:16:37 kafka | SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 
23:16:37 kafka | SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory] 23:16:37 kafka | [2024-04-10 23:14:02,996] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:02,997] INFO Client environment:host.name=5e0df65b7d20 (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:02,997] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:02,997] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:02,997] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:02,997] INFO Client environment:java.class.path=/usr/share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/share/java/kafka/jersey-common-2.39.1.jar:/usr/share/java/kafka/swagger-annotations-2.2.8.jar:/usr/share/java/kafka/jose4j-0.9.3.jar:/usr/share/java/kafka/commons-validator-1.7.jar:/usr/share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/share/java/kafka/rocksdbjni-7.9.2.jar:/usr/share/java/kafka/jackson-annotations-2.13.5.jar:/usr/share/java/kafka/commons-io-2.11.0.jar:/usr/share/java/kafka/javax.activation-api-1.2.0.jar:/usr/share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/share/java/kafka/commons-cli-1.4.jar:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/share/java/kafka/scala-reflect-2.13.11.jar:/usr/share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/share/java/kafka/jline-3.22.0.jar:/usr/share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/share/java/kafka/hk2-api-2.6.1.jar:/usr/share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/share/java/kafka/kafka.jar:/usr/share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/share/java/kafka/scala-library-2.13.11.jar:/usr/share/java/kafka/jakarta.inject-2.6.1.jar:/usr/share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/share/java/kafka/hk2-locator-2.6.1.jar:/usr/share/java/kafka/reflections-0.10.2.jar:/usr/share/java/kafka/slf4j-api-1.7.36.jar:/usr/share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/share/java/kafka/paranamer-2.8.jar:/usr/share/java/kafka/commons-beanutils-1.9.4.jar:/usr/share/java/kafka/jaxb-api-2.3.1.jar:/usr/share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/share/java/kafka/hk2-utils-2.6.1.jar:/usr/share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/share/java/kafka/reload4j-1.2.25.jar:/usr/share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/share/java/kafka/jackson-core-2.13.5.jar:/usr/share/java/kafka/jersey-hk2-2.39.1.jar:/usr/share/java/kafka/jackson-databin
d-2.13.5.jar:/usr/share/java/kafka/jersey-client-2.39.1.jar:/usr/share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/share/java/kafka/commons-digester-2.1.jar:/usr/share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/share/java/kafka/argparse4j-0.7.0.jar:/usr/share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/kafka/audience-annotations-0.12.0.jar:/usr/share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/kafka/maven-artifact-3.8.8.jar:/usr/share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/share/java/kafka/jersey-server-2.39.1.jar:/usr/share/java/kafka/commons-lang3-3.8.1.jar:/usr/share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/share/java/kafka/jopt-simple-5.0.4.jar:/usr/share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/share/java/kafka/lz4-java-1.8.0.jar:/usr/share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/share/java/kafka/checker-qual-3.19.0.jar:/usr/share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/share/java/kafka/pcollections-4.0.1.jar:/usr/share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/share/java/kafka/commons-logging-1.2.jar:/usr/share/java/kafka/jsr305-3.0.2.jar:/usr/share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/kafka/metrics-core-2.2.0.jar:/usr/share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/share/java/kafka/commons-collections-3.2.2.jar:/usr/share/java/kafka/javassist-3.29.2-GA.jar:/usr/share/java/kafka/caffeine-2.9.3.jar:/usr/share/java/kafka/plexus-utils-3.3.1.jar:/usr/share/java/kafka/zookeeper-3.8.3.jar:/usr/share/java/kafka/activation-1.1.1.jar:/usr/share/java/kafka/netty-common-4.1.100.Final.jar:/usr/share/java/kafka/metrics-core-4.1.12.1.jar:/usr/share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/share/java/kafka/snappy-java-1.1.10.5.jar:/usr/share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/
usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/common-utils-7.6.0.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/utility-belt-7.6.0.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:02,997] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:02,997] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:02,997] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:02,997] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:02,997] INFO Client environment:os.arch=amd64 
(org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:02,997] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:02,997] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:02,997] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:02,997] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:02,997] INFO Client environment:os.memory.free=487MB (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:02,997] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:02,998] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:03,000] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@2fd6b6c7 (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:03,004] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 23:16:37 kafka | [2024-04-10 23:14:03,008] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) 23:16:37 kafka | [2024-04-10 23:14:03,015] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 23:16:37 kafka | [2024-04-10 23:14:03,039] INFO Opening socket connection to server zookeeper/172.17.0.2:2181. (org.apache.zookeeper.ClientCnxn) 23:16:37 kafka | [2024-04-10 23:14:03,040] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) 23:16:37 kafka | [2024-04-10 23:14:03,051] INFO Socket connection established, initiating session, client: /172.17.0.6:53118, server: zookeeper/172.17.0.2:2181 (org.apache.zookeeper.ClientCnxn) 23:16:37 kafka | [2024-04-10 23:14:03,098] INFO Session establishment complete on server zookeeper/172.17.0.2:2181, session id = 0x10000037a3e0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) 23:16:37 kafka | [2024-04-10 23:14:03,223] INFO Session: 0x10000037a3e0000 closed (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:03,223] INFO EventThread shut down for session: 0x10000037a3e0000 (org.apache.zookeeper.ClientCnxn) 23:16:37 kafka | Using log4j config /etc/kafka/log4j.properties 23:16:37 kafka | ===> Launching ... 23:16:37 kafka | ===> Launching kafka ... 23:16:37 kafka | [2024-04-10 23:14:04,046] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 23:16:37 kafka | [2024-04-10 23:14:04,387] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 23:16:37 kafka | [2024-04-10 23:14:04,483] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) 23:16:37 kafka | [2024-04-10 23:14:04,484] INFO starting (kafka.server.KafkaServer) 23:16:37 kafka | [2024-04-10 23:14:04,485] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 23:16:37 kafka | [2024-04-10 23:14:04,498] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. 
(kafka.zookeeper.ZooKeeperClient) 23:16:37 kafka | [2024-04-10 23:14:04,502] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:04,502] INFO Client environment:host.name=5e0df65b7d20 (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:04,502] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:04,502] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:04,502] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:04,502] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/u
sr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr
/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:04,502] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:04,503] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:04,503] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:04,503] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:04,503] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:04,503] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:04,503] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:04,503] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:04,503] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:04,503] INFO Client environment:os.memory.free=1007MB (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:04,503] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:04,503] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:04,505] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@1f6c9cd8 (org.apache.zookeeper.ZooKeeper) 23:16:37 kafka | [2024-04-10 23:14:04,509] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 23:16:37 kafka | [2024-04-10 23:14:04,515] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 23:16:37 kafka | [2024-04-10 23:14:04,524] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) 23:16:37 kafka | [2024-04-10 23:14:04,529] INFO Opening socket connection to server zookeeper/172.17.0.2:2181. (org.apache.zookeeper.ClientCnxn) 23:16:37 kafka | [2024-04-10 23:14:04,535] INFO Socket connection established, initiating session, client: /172.17.0.6:53120, server: zookeeper/172.17.0.2:2181 (org.apache.zookeeper.ClientCnxn) 23:16:37 kafka | [2024-04-10 23:14:04,546] INFO Session establishment complete on server zookeeper/172.17.0.2:2181, session id = 0x10000037a3e0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 23:16:37 kafka | [2024-04-10 23:14:04,552] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) 23:16:37 kafka | [2024-04-10 23:14:04,935] INFO Cluster ID = mAFlxob1QoSnxAKM2SbgkA (kafka.server.KafkaServer) 23:16:37 kafka | [2024-04-10 23:14:04,939] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 23:16:37 kafka | [2024-04-10 23:14:04,991] INFO KafkaConfig values: 23:16:37 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 23:16:37 kafka | alter.config.policy.class.name = null 23:16:37 kafka | alter.log.dirs.replication.quota.window.num = 11 23:16:37 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 23:16:37 kafka | authorizer.class.name = 23:16:37 kafka | auto.create.topics.enable = true 23:16:37 kafka | auto.include.jmx.reporter = true 23:16:37 kafka | auto.leader.rebalance.enable = true 23:16:37 kafka | background.threads = 10 23:16:37 kafka | broker.heartbeat.interval.ms = 2000 23:16:37 kafka | broker.id = 1 23:16:37 kafka | broker.id.generation.enable = true 23:16:37 kafka | broker.rack = null 23:16:37 kafka | broker.session.timeout.ms = 9000 23:16:37 kafka | client.quota.callback.class = null 23:16:37 kafka | compression.type = producer 23:16:37 kafka | connection.failed.authentication.delay.ms = 100 23:16:37 kafka | connections.max.idle.ms = 600000 23:16:37 kafka | connections.max.reauth.ms = 0 23:16:37 kafka | control.plane.listener.name = null 23:16:37 kafka | controlled.shutdown.enable = true 23:16:37 kafka | controlled.shutdown.max.retries = 3 23:16:37 kafka | controlled.shutdown.retry.backoff.ms = 5000 23:16:37 kafka | controller.listener.names = null 23:16:37 kafka | controller.quorum.append.linger.ms = 25 23:16:37 kafka | controller.quorum.election.backoff.max.ms = 1000 23:16:37 kafka | controller.quorum.election.timeout.ms = 1000 23:16:37 kafka | controller.quorum.fetch.timeout.ms = 2000 23:16:37 kafka | controller.quorum.request.timeout.ms = 2000 23:16:37 kafka | controller.quorum.retry.backoff.ms = 20 23:16:37 kafka | controller.quorum.voters = [] 23:16:37 kafka | controller.quota.window.num = 11 23:16:37 kafka | controller.quota.window.size.seconds = 1 23:16:37 kafka | controller.socket.timeout.ms = 30000 23:16:37 kafka | create.topic.policy.class.name = null 23:16:37 kafka | default.replication.factor = 1 23:16:37 kafka | delegation.token.expiry.check.interval.ms = 3600000 23:16:37 kafka | delegation.token.expiry.time.ms = 86400000 23:16:37 kafka | delegation.token.master.key = null 23:16:37 kafka | delegation.token.max.lifetime.ms = 604800000 23:16:37 kafka | delegation.token.secret.key = null 23:16:37 kafka | delete.records.purgatory.purge.interval.requests = 1 23:16:37 kafka | delete.topic.enable = true 23:16:37 kafka | early.start.listeners = null 23:16:37 kafka | fetch.max.bytes = 57671680 23:16:37 kafka | fetch.purgatory.purge.interval.requests = 1000 23:16:37 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] 23:16:37 kafka | group.consumer.heartbeat.interval.ms = 5000 23:16:37 kafka | group.consumer.max.heartbeat.interval.ms = 15000 23:16:37 kafka | group.consumer.max.session.timeout.ms = 60000 23:16:37 kafka | group.consumer.max.size = 2147483647 23:16:37 kafka | group.consumer.min.heartbeat.interval.ms = 5000 23:16:37 kafka | group.consumer.min.session.timeout.ms = 45000 23:16:37 kafka | group.consumer.session.timeout.ms = 45000 23:16:37 kafka | group.coordinator.new.enable = false 23:16:37 kafka | group.coordinator.threads = 1 23:16:37 kafka | 
group.initial.rebalance.delay.ms = 3000 23:16:37 kafka | group.max.session.timeout.ms = 1800000 23:16:37 kafka | group.max.size = 2147483647 23:16:37 kafka | group.min.session.timeout.ms = 6000 23:16:37 kafka | initial.broker.registration.timeout.ms = 60000 23:16:37 kafka | inter.broker.listener.name = PLAINTEXT 23:16:37 kafka | inter.broker.protocol.version = 3.6-IV2 23:16:37 kafka | kafka.metrics.polling.interval.secs = 10 23:16:37 kafka | kafka.metrics.reporters = [] 23:16:37 kafka | leader.imbalance.check.interval.seconds = 300 23:16:37 kafka | leader.imbalance.per.broker.percentage = 10 23:16:37 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 23:16:37 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 23:16:37 kafka | log.cleaner.backoff.ms = 15000 23:16:37 kafka | log.cleaner.dedupe.buffer.size = 134217728 23:16:37 kafka | log.cleaner.delete.retention.ms = 86400000 23:16:37 kafka | log.cleaner.enable = true 23:16:37 kafka | log.cleaner.io.buffer.load.factor = 0.9 23:16:37 kafka | log.cleaner.io.buffer.size = 524288 23:16:37 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 23:16:37 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 23:16:37 kafka | log.cleaner.min.cleanable.ratio = 0.5 23:16:37 kafka | log.cleaner.min.compaction.lag.ms = 0 23:16:37 kafka | log.cleaner.threads = 1 23:16:37 kafka | log.cleanup.policy = [delete] 23:16:37 kafka | log.dir = /tmp/kafka-logs 23:16:37 kafka | log.dirs = /var/lib/kafka/data 23:16:37 kafka | log.flush.interval.messages = 9223372036854775807 23:16:37 kafka | log.flush.interval.ms = null 23:16:37 kafka | log.flush.offset.checkpoint.interval.ms = 60000 23:16:37 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 23:16:37 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 23:16:37 kafka | log.index.interval.bytes = 4096 23:16:37 kafka | log.index.size.max.bytes = 10485760 23:16:37 kafka | log.local.retention.bytes = -2 23:16:37 kafka | log.local.retention.ms = -2 23:16:37 kafka | log.message.downconversion.enable = true 23:16:37 mariadb | 2024-04-10 23:13:59+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 23:16:37 mariadb | 2024-04-10 23:13:59+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 23:16:37 mariadb | 2024-04-10 23:13:59+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 23:16:37 mariadb | 2024-04-10 23:14:00+00:00 [Note] [Entrypoint]: Initializing database files 23:16:37 mariadb | 2024-04-10 23:14:00 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:37 mariadb | 2024-04-10 23:14:00 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:37 mariadb | 2024-04-10 23:14:00 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:37 mariadb | 23:16:37 mariadb | 23:16:37 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! 23:16:37 mariadb | To do so, start the server, then issue the following command: 23:16:37 mariadb | 23:16:37 mariadb | '/usr/bin/mysql_secure_installation' 23:16:37 mariadb | 23:16:37 mariadb | which will also give you the option of removing the test 23:16:37 mariadb | databases and anonymous user created by default. This is 23:16:37 mariadb | strongly recommended for production servers. 
23:16:37 mariadb | 23:16:37 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb 23:16:37 mariadb | 23:16:37 mariadb | Please report any problems at https://mariadb.org/jira 23:16:37 mariadb | 23:16:37 mariadb | The latest information about MariaDB is available at https://mariadb.org/. 23:16:37 mariadb | 23:16:37 mariadb | Consider joining MariaDB's strong and vibrant community: 23:16:37 mariadb | https://mariadb.org/get-involved/ 23:16:37 mariadb | 23:16:37 mariadb | 2024-04-10 23:14:01+00:00 [Note] [Entrypoint]: Database files initialized 23:16:37 mariadb | 2024-04-10 23:14:01+00:00 [Note] [Entrypoint]: Starting temporary server 23:16:37 mariadb | 2024-04-10 23:14:01+00:00 [Note] [Entrypoint]: Waiting for server startup 23:16:37 mariadb | 2024-04-10 23:14:01 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 96 ... 23:16:37 mariadb | 2024-04-10 23:14:01 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 23:16:37 mariadb | 2024-04-10 23:14:01 0 [Note] InnoDB: Number of transaction pools: 1 23:16:37 mariadb | 2024-04-10 23:14:01 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 23:16:37 mariadb | 2024-04-10 23:14:01 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 23:16:37 mariadb | 2024-04-10 23:14:01 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:37 mariadb | 2024-04-10 23:14:01 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:37 mariadb | 2024-04-10 23:14:01 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 23:16:37 mariadb | 2024-04-10 23:14:01 0 [Note] InnoDB: Completed initialization of buffer pool 23:16:37 mariadb | 2024-04-10 23:14:01 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 23:16:37 mariadb | 2024-04-10 23:14:01 0 [Note] InnoDB: 128 rollback segments are active. 23:16:37 mariadb | 2024-04-10 23:14:01 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 23:16:37 mariadb | 2024-04-10 23:14:01 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 23:16:37 mariadb | 2024-04-10 23:14:01 0 [Note] InnoDB: log sequence number 46590; transaction id 14 23:16:37 mariadb | 2024-04-10 23:14:01 0 [Note] Plugin 'FEEDBACK' is disabled. 23:16:37 mariadb | 2024-04-10 23:14:01 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:37 mariadb | 2024-04-10 23:14:01 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. 23:16:37 mariadb | 2024-04-10 23:14:01 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. 23:16:37 mariadb | 2024-04-10 23:14:01 0 [Note] mariadbd: ready for connections. 23:16:37 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution 23:16:37 mariadb | 2024-04-10 23:14:02+00:00 [Note] [Entrypoint]: Temporary server started. 
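With the temporary server running, the entrypoint goes on to create policy_user and to secure the system accounts (the log below describes this step as equivalent to running mysql_secure_installation). A minimal, hypothetical shell sketch of roughly what that hardening amounts to, written in the same --execute style as the db.sh excerpt further down in this log; it assumes MYSQL_ROOT_PASSWORD is exported in the container environment and is not the exact command sequence the entrypoint runs:

# Sketch only (hypothetical): drop anonymous accounts and the default test
# database, then reload the grant tables. Assumes MYSQL_ROOT_PASSWORD is set.
mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "DROP USER IF EXISTS ''@'localhost'; DROP USER IF EXISTS ''@'%';"
mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "DROP DATABASE IF EXISTS test;"
mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;"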
23:16:37 mariadb | 2024-04-10 23:14:04+00:00 [Note] [Entrypoint]: Creating user policy_user 23:16:37 mariadb | 2024-04-10 23:14:04+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) 23:16:37 mariadb | 23:16:37 mariadb | 2024-04-10 23:14:04+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf 23:16:37 mariadb | 23:16:37 mariadb | 2024-04-10 23:14:04+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh 23:16:37 mariadb | #!/bin/bash -xv 23:16:37 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved 23:16:37 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 23:16:37 mariadb | # 23:16:37 grafana | logger=settings t=2024-04-10T23:14:05.034870935Z level=info msg="Starting Grafana" version=10.4.1 commit=d94d597d847c05085542c29dfad6b3f469cc77e1 branch=v10.4.x compiled=2024-04-10T23:14:05Z 23:16:37 grafana | logger=settings t=2024-04-10T23:14:05.035308937Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 23:16:37 grafana | logger=settings t=2024-04-10T23:14:05.035331407Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 23:16:37 grafana | logger=settings t=2024-04-10T23:14:05.035337018Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 23:16:37 grafana | logger=settings t=2024-04-10T23:14:05.035342288Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 23:16:37 grafana | logger=settings t=2024-04-10T23:14:05.035346818Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 23:16:37 grafana | logger=settings t=2024-04-10T23:14:05.035350658Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 23:16:37 grafana | logger=settings t=2024-04-10T23:14:05.035357768Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 23:16:37 grafana | logger=settings t=2024-04-10T23:14:05.035399639Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 23:16:37 grafana | logger=settings t=2024-04-10T23:14:05.03541064Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 23:16:37 grafana | logger=settings t=2024-04-10T23:14:05.03542022Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 23:16:37 grafana | logger=settings t=2024-04-10T23:14:05.03542999Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 23:16:37 grafana | logger=settings t=2024-04-10T23:14:05.03543549Z level=info msg=Target target=[all] 23:16:37 grafana | logger=settings t=2024-04-10T23:14:05.035487082Z level=info msg="Path Home" path=/usr/share/grafana 23:16:37 grafana | logger=settings t=2024-04-10T23:14:05.035493322Z level=info msg="Path Data" path=/var/lib/grafana 23:16:37 grafana | logger=settings t=2024-04-10T23:14:05.035498722Z level=info msg="Path Logs" path=/var/log/grafana 23:16:37 grafana | logger=settings t=2024-04-10T23:14:05.035531683Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 23:16:37 grafana | logger=settings t=2024-04-10T23:14:05.035544143Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 23:16:37 grafana | logger=settings 
t=2024-04-10T23:14:05.035550553Z level=info msg="App mode production" 23:16:37 grafana | logger=sqlstore t=2024-04-10T23:14:05.035998956Z level=info msg="Connecting to DB" dbtype=sqlite3 23:16:37 grafana | logger=sqlstore t=2024-04-10T23:14:05.036061768Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.037309512Z level=info msg="Starting DB migrations" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.038903296Z level=info msg="Executing migration" id="create migration_log table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.040138269Z level=info msg="Migration successfully executed" id="create migration_log table" duration=1.234393ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.043440779Z level=info msg="Executing migration" id="create user table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.044107347Z level=info msg="Migration successfully executed" id="create user table" duration=666.088µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.04784759Z level=info msg="Executing migration" id="add unique index user.login" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.048524808Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=677.088µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.053047822Z level=info msg="Executing migration" id="add unique index user.email" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.056450274Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=3.401792ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.062931061Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.064469173Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.537612ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.069962843Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.070926499Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=963.486µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.075858004Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.078497406Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.638372ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.106927022Z level=info msg="Executing migration" id="create user table v2" 23:16:37 mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); 23:16:37 mariadb | # you may not use this file except in compliance with the License. 23:16:37 mariadb | # You may obtain a copy of the License at 23:16:37 mariadb | # 23:16:37 mariadb | # http://www.apache.org/licenses/LICENSE-2.0 23:16:37 mariadb | # 23:16:37 mariadb | # Unless required by applicable law or agreed to in writing, software 23:16:37 mariadb | # distributed under the License is distributed on an "AS IS" BASIS, 23:16:37 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 23:16:37 mariadb | # See the License for the specific language governing permissions and 23:16:37 mariadb | # limitations under the License. 
23:16:37 mariadb | 23:16:37 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:37 mariadb | do 23:16:37 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" 23:16:37 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" 23:16:37 mariadb | done 23:16:37 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:37 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' 23:16:37 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:37 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:37 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' 23:16:37 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:37 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:37 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' 23:16:37 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:37 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:37 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' 23:16:37 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:37 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:37 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' 23:16:37 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:37 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:37 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' 23:16:37 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:37 mariadb | 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.107689223Z level=info msg="Migration successfully executed" id="create user table v2" duration=763.891µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.112030972Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.112684289Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=650.767µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.118317223Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.118862988Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=545.715µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.123016832Z level=info msg="Executing migration" id="copy data_source v1 to v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.123903416Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=885.423µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.128957474Z 
level=info msg="Executing migration" id="Drop old table user_v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.130111205Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=1.153181ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.136338795Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.137660891Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.321206ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.142240027Z level=info msg="Executing migration" id="Update user table charset" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.142273487Z level=info msg="Migration successfully executed" id="Update user table charset" duration=33.622µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.146931094Z level=info msg="Executing migration" id="Add last_seen_at column to user" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.149286698Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=2.353674ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.157425191Z level=info msg="Executing migration" id="Add missing user data" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.157706508Z level=info msg="Migration successfully executed" id="Add missing user data" duration=284.988µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.165690426Z level=info msg="Executing migration" id="Add is_disabled column to user" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.166834087Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.143171ms 23:16:37 kafka | log.message.format.version = 3.0-IV1 23:16:37 kafka | log.message.timestamp.after.max.ms = 9223372036854775807 23:16:37 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 23:16:37 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 23:16:37 kafka | log.message.timestamp.type = CreateTime 23:16:37 kafka | log.preallocate = false 23:16:37 kafka | log.retention.bytes = -1 23:16:37 kafka | log.retention.check.interval.ms = 300000 23:16:37 kafka | log.retention.hours = 168 23:16:37 kafka | log.retention.minutes = null 23:16:37 kafka | log.retention.ms = null 23:16:37 kafka | log.roll.hours = 168 23:16:37 kafka | log.roll.jitter.hours = 0 23:16:37 kafka | log.roll.jitter.ms = null 23:16:37 kafka | log.roll.ms = null 23:16:37 kafka | log.segment.bytes = 1073741824 23:16:37 kafka | log.segment.delete.delay.ms = 60000 23:16:37 kafka | max.connection.creation.rate = 2147483647 23:16:37 kafka | max.connections = 2147483647 23:16:37 kafka | max.connections.per.ip = 2147483647 23:16:37 kafka | max.connections.per.ip.overrides = 23:16:37 kafka | max.incremental.fetch.session.cache.slots = 1000 23:16:37 kafka | message.max.bytes = 1048588 23:16:37 kafka | metadata.log.dir = null 23:16:37 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 23:16:37 kafka | metadata.log.max.snapshot.interval.ms = 3600000 23:16:37 kafka | metadata.log.segment.bytes = 1073741824 23:16:37 kafka | metadata.log.segment.min.bytes = 8388608 23:16:37 kafka | metadata.log.segment.ms = 604800000 23:16:37 kafka | metadata.max.idle.interval.ms = 500 23:16:37 kafka | metadata.max.retention.bytes = 104857600 23:16:37 kafka | metadata.max.retention.ms = 604800000 23:16:37 kafka | metric.reporters = [] 23:16:37 kafka | 
metrics.num.samples = 2 23:16:37 kafka | metrics.recording.level = INFO 23:16:37 kafka | metrics.sample.window.ms = 30000 23:16:37 kafka | min.insync.replicas = 1 23:16:37 kafka | node.id = 1 23:16:37 kafka | num.io.threads = 8 23:16:37 kafka | num.network.threads = 3 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.172962104Z level=info msg="Executing migration" id="Add index user.login/user.email" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.173807128Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=849.074µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.176609764Z level=info msg="Executing migration" id="Add is_service_account column to user" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.177838338Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.228144ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.180651164Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.187718298Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=7.067514ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.191020468Z level=info msg="Executing migration" id="Add uid column to user" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.191866001Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=845.102µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.197073013Z level=info msg="Executing migration" id="Update uid column values for users" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.197283538Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=210.555µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.200200768Z level=info msg="Executing migration" id="Add unique index user_uid" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.200895937Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=695.029µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.204139096Z level=info msg="Executing migration" id="update login field with orgid to allow for multiple service accounts with same name across orgs" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.204438784Z level=info msg="Migration successfully executed" id="update login field with orgid to allow for multiple service accounts with same name across orgs" duration=299.848µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.208295429Z level=info msg="Executing migration" id="create temp user table v1-7" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.209106641Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=807.822µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.214618961Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.215714892Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.092741ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.220113802Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.221290104Z level=info msg="Migration successfully executed" id="create index 
IDX_temp_user_org_id - v1-7" duration=1.175763ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.226490156Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.227175615Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=684.869µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.231547504Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.232628324Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.080361ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.236641583Z level=info msg="Executing migration" id="Update temp_user table charset" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.236679554Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=39.271µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.241251169Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.241874126Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=623.147µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.246289917Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.247125499Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=833.592µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.251097727Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.252637Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.537323ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.256203987Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.256837754Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=632.997µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.261930623Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.264851923Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=2.92045ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.268131412Z level=info msg="Executing migration" id="create temp_user v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.268960215Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=828.363µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.272505562Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.273252342Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=746.56µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.277978051Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.279091301Z level=info msg="Migration successfully executed" 
id="create index IDX_temp_user_org_id - v2" duration=1.11236ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.282759912Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.283862412Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=1.09933ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.287423259Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.28855076Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.126731ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.293690711Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.294055391Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=365.179µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.297370591Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.297842094Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=471.382µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.301595556Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 23:16:37 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" 23:16:37 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' 23:16:37 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql 23:16:37 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp 23:16:37 mariadb | 23:16:37 mariadb | 2024-04-10 23:14:05+00:00 [Note] [Entrypoint]: Stopping temporary server 23:16:37 mariadb | 2024-04-10 23:14:05 0 [Note] mariadbd (initiated by: unknown): Normal shutdown 23:16:37 mariadb | 2024-04-10 23:14:05 0 [Note] InnoDB: FTS optimize thread exiting. 23:16:37 mariadb | 2024-04-10 23:14:05 0 [Note] InnoDB: Starting shutdown... 23:16:37 mariadb | 2024-04-10 23:14:05 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool 23:16:37 mariadb | 2024-04-10 23:14:05 0 [Note] InnoDB: Buffer pool(s) dump completed at 240410 23:14:05 23:16:37 mariadb | 2024-04-10 23:14:05 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" 23:16:37 mariadb | 2024-04-10 23:14:05 0 [Note] InnoDB: Shutdown completed; log sequence number 340012; transaction id 298 23:16:37 mariadb | 2024-04-10 23:14:05 0 [Note] mariadbd: Shutdown complete 23:16:37 mariadb | 23:16:37 mariadb | 2024-04-10 23:14:05+00:00 [Note] [Entrypoint]: Temporary server stopped 23:16:37 mariadb | 23:16:37 mariadb | 2024-04-10 23:14:05+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. 23:16:37 mariadb | 23:16:37 mariadb | 2024-04-10 23:14:05 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... 
23:16:37 mariadb | 2024-04-10 23:14:05 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 23:16:37 mariadb | 2024-04-10 23:14:05 0 [Note] InnoDB: Number of transaction pools: 1 23:16:37 mariadb | 2024-04-10 23:14:05 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 23:16:37 mariadb | 2024-04-10 23:14:05 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 23:16:37 mariadb | 2024-04-10 23:14:05 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:37 mariadb | 2024-04-10 23:14:05 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:37 mariadb | 2024-04-10 23:14:05 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 23:16:37 mariadb | 2024-04-10 23:14:05 0 [Note] InnoDB: Completed initialization of buffer pool 23:16:37 mariadb | 2024-04-10 23:14:05 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 23:16:37 mariadb | 2024-04-10 23:14:05 0 [Note] InnoDB: 128 rollback segments are active. 23:16:37 mariadb | 2024-04-10 23:14:05 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 23:16:37 mariadb | 2024-04-10 23:14:05 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 23:16:37 mariadb | 2024-04-10 23:14:05 0 [Note] InnoDB: log sequence number 340012; transaction id 299 23:16:37 mariadb | 2024-04-10 23:14:05 0 [Note] Plugin 'FEEDBACK' is disabled. 23:16:37 mariadb | 2024-04-10 23:14:05 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool 23:16:37 mariadb | 2024-04-10 23:14:05 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:37 mariadb | 2024-04-10 23:14:05 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. 23:16:37 mariadb | 2024-04-10 23:14:05 0 [Note] Server socket created on IP: '0.0.0.0'. 23:16:37 mariadb | 2024-04-10 23:14:05 0 [Note] Server socket created on IP: '::'. 23:16:37 mariadb | 2024-04-10 23:14:05 0 [Note] mariadbd: ready for connections. 
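Once mariadbd reports "ready for connections" on port 3306, the application user created above can be exercised directly; a minimal sketch, assuming a mysql client is available wherever the check runs and using the policy_user credentials and the 172.17.0.4 address that appear elsewhere in this log:

    # Verify the policyadmin schema is reachable with the application credentials
    mysql -upolicy_user -ppolicy_user -h 172.17.0.4 -P 3306 policyadmin --execute 'SELECT 1;'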
23:16:37 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution 23:16:37 mariadb | 2024-04-10 23:14:05 0 [Note] InnoDB: Buffer pool(s) load completed at 240410 23:14:05 23:16:37 mariadb | 2024-04-10 23:14:06 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) 23:16:37 mariadb | 2024-04-10 23:14:06 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication) 23:16:37 mariadb | 2024-04-10 23:14:06 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication) 23:16:37 mariadb | 2024-04-10 23:14:06 6 [Warning] Aborted connection 6 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.302117861Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=522.745µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.305878383Z level=info msg="Executing migration" id="create star table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.306823519Z level=info msg="Migration successfully executed" id="create star table" duration=944.265µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.311531617Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.312273977Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=742.56µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.316256817Z level=info msg="Executing migration" id="create org table v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.317272014Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.011307ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.321043617Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.322194919Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.150692ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.327237106Z level=info msg="Executing migration" id="create org_user table v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.327943586Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=706.569µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.331589495Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.332305764Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=715.889µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.336025046Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.337222388Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=1.196032ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.341336581Z level=info msg="Executing migration" id="create index 
IDX_org_user_user_id - v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.342462352Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.124661ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.347251133Z level=info msg="Executing migration" id="Update org table charset" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.347292974Z level=info msg="Migration successfully executed" id="Update org table charset" duration=42.401µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.351306603Z level=info msg="Executing migration" id="Update org_user table charset" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.351330054Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=23.651µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.354812758Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.354977243Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=164.645µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.358288074Z level=info msg="Executing migration" id="create dashboard table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.35925013Z level=info msg="Migration successfully executed" id="create dashboard table" duration=960.546µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.363784074Z level=info msg="Executing migration" id="add index dashboard.account_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.365015567Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.229023ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.36878132Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.369602063Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=819.693µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.372970265Z level=info msg="Executing migration" id="create dashboard_tag table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.373629762Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=660.458µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.382772592Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.383534193Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=761.061µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.393935986Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.395964002Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=2.034306ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.402388607Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.407319942Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=4.930905ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.414485158Z level=info msg="Executing migration" 
id="create dashboard v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.415108155Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=622.777µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.430983598Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.43176755Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=783.792µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.462223951Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.464176654Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.954663ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.477331363Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.478279439Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=962.426µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.489675811Z level=info msg="Executing migration" id="drop table dashboard_v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.491077718Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.401647ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.53035139Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.530605377Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=273.337µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.535628565Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.53838494Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.757865ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.542142192Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.544103456Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.960484ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.550944352Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.553539403Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=2.596031ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.557628655Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.55891789Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=1.288655ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.563732902Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.565894251Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=2.18279ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.571962017Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 23:16:37 
grafana | logger=migrator t=2024-04-10T23:14:05.573625361Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.663635ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.578161476Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.579628536Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.468341ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.583625285Z level=info msg="Executing migration" id="Update dashboard table charset" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.583658286Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=33.571µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.589096374Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.589123465Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=28.131µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.592551868Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.595682344Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.129435ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.599227241Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.60213436Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.906739ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.606824248Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.608843003Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.018385ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.613293905Z level=info msg="Executing migration" id="Add column uid in dashboard" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.61535016Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.055515ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.618995641Z level=info msg="Executing migration" id="Update uid column values in dashboard" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.619261118Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=264.707µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.622805994Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.623708499Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=903.675µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.629871948Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.631214974Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=1.342906ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.635979594Z level=info msg="Executing migration" id="Update dashboard title length" 23:16:37 grafana | logger=migrator 
t=2024-04-10T23:14:05.636006785Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=28.061µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.640726243Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.641620428Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=892.955µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.652054422Z level=info msg="Executing migration" id="create dashboard_provisioning" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.653667177Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.611955ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.657628794Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.664496942Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=6.96722ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.671600086Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.672429959Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=829.353µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.679004358Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.67982271Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=818.012µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.683925693Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.684768406Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=842.153µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.688613041Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.689108635Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=495.403µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.69592054Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.696763213Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=842.303µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.702550782Z level=info msg="Executing migration" id="Add check_sum column" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.705830691Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=3.280029ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.709899982Z level=info msg="Executing migration" id="Add index for dashboard_title" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.710741804Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=839.912µs 23:16:37 grafana | 
logger=migrator t=2024-04-10T23:14:05.716031029Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 23:16:37 policy-db-migrator | Waiting for mariadb port 3306... 23:16:37 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:37 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:37 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:37 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:37 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:37 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:37 policy-db-migrator | Connection to mariadb (172.17.0.4) 3306 port [tcp/mysql] succeeded! 23:16:37 policy-db-migrator | 321 blocks 23:16:37 policy-db-migrator | Preparing upgrade release version: 0800 23:16:37 policy-db-migrator | Preparing upgrade release version: 0900 23:16:37 policy-db-migrator | Preparing upgrade release version: 1000 23:16:37 policy-db-migrator | Preparing upgrade release version: 1100 23:16:37 policy-db-migrator | Preparing upgrade release version: 1200 23:16:37 policy-db-migrator | Preparing upgrade release version: 1300 23:16:37 policy-db-migrator | Done 23:16:37 policy-db-migrator | name version 23:16:37 policy-db-migrator | policyadmin 0 23:16:37 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 23:16:37 policy-db-migrator | upgrade: 0 -> 1300 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF 
NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:37 policy-db-migrator | -------------- 
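The "Waiting for mariadb port 3306..." and repeated "nc: ... Connection refused" lines from policy-db-migrator come from a simple poll-until-open loop run before the SQL upgrades start; a minimal sketch of that pattern, assuming nc with the -z (scan only) option and a one-second retry, since the actual script inside the image is not shown in this log:

    # Block until MariaDB accepts TCP connections on 3306, retrying once per second
    while ! nc -z mariadb 3306; do
        echo "Waiting for mariadb port 3306..."
        sleep 1
    done
    echo "Connection to mariadb 3306 port [tcp/mysql] succeeded!"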
23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 kafka | num.partitions = 1 23:16:37 kafka | num.recovery.threads.per.data.dir = 1 23:16:37 kafka | num.replica.alter.log.dirs.threads = null 23:16:37 kafka | num.replica.fetchers = 1 23:16:37 kafka | offset.metadata.max.bytes = 4096 23:16:37 kafka | offsets.commit.required.acks = -1 23:16:37 kafka | offsets.commit.timeout.ms = 5000 23:16:37 kafka | offsets.load.buffer.size = 5242880 23:16:37 kafka | offsets.retention.check.interval.ms = 600000 23:16:37 kafka | offsets.retention.minutes = 10080 23:16:37 kafka | offsets.topic.compression.codec = 0 23:16:37 kafka | offsets.topic.num.partitions = 50 23:16:37 kafka | offsets.topic.replication.factor = 1 23:16:37 kafka | offsets.topic.segment.bytes = 104857600 23:16:37 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 23:16:37 kafka | password.encoder.iterations = 4096 23:16:37 kafka | password.encoder.key.length = 128 23:16:37 kafka | password.encoder.keyfactory.algorithm = null 23:16:37 kafka | password.encoder.old.secret = null 23:16:37 kafka | password.encoder.secret = null 23:16:37 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 23:16:37 kafka | process.roles = [] 23:16:37 kafka | producer.id.expiration.check.interval.ms = 600000 23:16:37 kafka | producer.id.expiration.ms = 86400000 23:16:37 kafka | producer.purgatory.purge.interval.requests = 1000 23:16:37 kafka | queued.max.request.bytes = -1 23:16:37 kafka | queued.max.requests = 500 23:16:37 kafka | quota.window.num = 11 23:16:37 kafka | quota.window.size.seconds = 1 23:16:37 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 23:16:37 kafka | remote.log.manager.task.interval.ms = 30000 23:16:37 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 23:16:37 kafka | remote.log.manager.task.retry.backoff.ms = 500 23:16:37 kafka | remote.log.manager.task.retry.jitter = 0.2 23:16:37 kafka | remote.log.manager.thread.pool.size = 10 23:16:37 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 23:16:37 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager 23:16:37 kafka | remote.log.metadata.manager.class.path = null 23:16:37 kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. 23:16:37 kafka | remote.log.metadata.manager.listener.name = null 23:16:37 kafka | remote.log.reader.max.pending.tasks = 100 23:16:37 kafka | remote.log.reader.threads = 10 23:16:37 kafka | remote.log.storage.manager.class.name = null 23:16:37 kafka | remote.log.storage.manager.class.path = null 23:16:37 kafka | remote.log.storage.manager.impl.prefix = rsm.config. 
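The kafka | lines are the broker dumping its effective configuration at startup. Written out as a server.properties-style excerpt for reference, with values copied from the dump above (the file itself is illustrative and is not taken from the container):

    # Hand-written excerpt reproducing a few of the logged broker settings
    cat <<'EOF' > /tmp/broker-excerpt.properties
    node.id=1
    num.network.threads=3
    num.io.threads=8
    log.retention.hours=168
    log.segment.bytes=1073741824
    offsets.topic.replication.factor=1
    transaction.state.log.min.isr=2
    EOF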
23:16:37 kafka | remote.log.storage.system.enable = false 23:16:37 kafka | replica.fetch.backoff.ms = 1000 23:16:37 kafka | replica.fetch.max.bytes = 1048576 23:16:37 kafka | replica.fetch.min.bytes = 1 23:16:37 kafka | replica.fetch.response.max.bytes = 10485760 23:16:37 kafka | replica.fetch.wait.max.ms = 500 23:16:37 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 23:16:37 kafka | replica.lag.time.max.ms = 30000 23:16:37 kafka | replica.selector.class = null 23:16:37 kafka | replica.socket.receive.buffer.bytes = 65536 23:16:37 kafka | replica.socket.timeout.ms = 30000 23:16:37 kafka | replication.quota.window.num = 11 23:16:37 kafka | replication.quota.window.size.seconds = 1 23:16:37 kafka | request.timeout.ms = 30000 23:16:37 kafka | reserved.broker.max.id = 1000 23:16:37 kafka | sasl.client.callback.handler.class = null 23:16:37 kafka | sasl.enabled.mechanisms = [GSSAPI] 23:16:37 kafka | sasl.jaas.config = null 23:16:37 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:37 kafka | sasl.kerberos.min.time.before.relogin = 60000 23:16:37 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 23:16:37 kafka | sasl.kerberos.service.name = null 23:16:37 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:37 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:37 kafka | sasl.login.callback.handler.class = null 23:16:37 kafka | sasl.login.class = null 23:16:37 kafka | sasl.login.connect.timeout.ms = null 23:16:37 kafka | sasl.login.read.timeout.ms = null 23:16:37 kafka | sasl.login.refresh.buffer.seconds = 300 23:16:37 kafka | sasl.login.refresh.min.period.seconds = 60 23:16:37 kafka | sasl.login.refresh.window.factor = 0.8 23:16:37 kafka | sasl.login.refresh.window.jitter = 0.05 23:16:37 kafka | sasl.login.retry.backoff.max.ms = 10000 23:16:37 kafka | sasl.login.retry.backoff.ms = 100 23:16:37 kafka | sasl.mechanism.controller.protocol = GSSAPI 23:16:37 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 23:16:37 kafka | sasl.oauthbearer.clock.skew.seconds = 30 23:16:37 kafka | sasl.oauthbearer.expected.audience = null 23:16:37 kafka | sasl.oauthbearer.expected.issuer = null 23:16:37 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:37 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:37 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:37 kafka | sasl.oauthbearer.jwks.endpoint.url = null 23:16:37 kafka | sasl.oauthbearer.scope.claim.name = scope 23:16:37 kafka | sasl.oauthbearer.sub.claim.name = sub 23:16:37 kafka | sasl.oauthbearer.token.endpoint.url = null 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.716331837Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=301.058µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.721099847Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.721437896Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=338.309µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.725135698Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.726446563Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.310475ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.730195055Z level=info msg="Executing migration" id="Add isPublic for dashboard" 
23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.732273172Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.077917ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.739778508Z level=info msg="Executing migration" id="create data_source table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.741048582Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.267854ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.74722752Z level=info msg="Executing migration" id="add index data_source.account_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.748042603Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=815.123µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.75303245Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.754232812Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.200042ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.762161439Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.762863468Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=702.108µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.766584069Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.768000378Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.378718ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.774242788Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.781007903Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=6.768595ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.818164847Z level=info msg="Executing migration" id="create data_source table v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.820971014Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=2.815277ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.830455342Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.831368977Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=913.055µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.838602505Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.839752136Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.149831ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.84391678Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.844839915Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=923.485µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.851472257Z level=info 
msg="Executing migration" id="Add column with_credentials" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.853603904Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.130727ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.856919865Z level=info msg="Executing migration" id="Add secure json data column" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.860097482Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=3.176717ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.863737461Z level=info msg="Executing migration" id="Update data_source table charset" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.863836594Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=101.623µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.869255682Z level=info msg="Executing migration" id="Update initial version to 1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.869585781Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=334.479µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.875705128Z level=info msg="Executing migration" id="Add read_only data column" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.879241165Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=3.541058ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.883697146Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.884037325Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=339.959µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.887230082Z level=info msg="Executing migration" id="Update json_data with nulls" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.887391896Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=161.834µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.894490291Z level=info msg="Executing migration" id="Add uid column" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.897272717Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.781985ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.900207627Z level=info msg="Executing migration" id="Update uid value" 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:37 policy-db-migrator | -------------- 
23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 23:16:37 kafka | sasl.server.callback.handler.class = null 23:16:37 kafka | sasl.server.max.receive.size = 524288 23:16:37 kafka | security.inter.broker.protocol = PLAINTEXT 23:16:37 kafka | security.providers = null 23:16:37 kafka | server.max.startup.time.ms = 9223372036854775807 23:16:37 kafka | socket.connection.setup.timeout.max.ms = 30000 23:16:37 kafka | socket.connection.setup.timeout.ms = 10000 23:16:37 kafka | socket.listen.backlog.size = 50 23:16:37 kafka | socket.receive.buffer.bytes = 102400 23:16:37 kafka | socket.request.max.bytes = 104857600 23:16:37 kafka | socket.send.buffer.bytes = 102400 23:16:37 kafka | ssl.cipher.suites = [] 23:16:37 kafka | ssl.client.auth = none 23:16:37 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:37 kafka | ssl.endpoint.identification.algorithm = https 23:16:37 kafka | ssl.engine.factory.class = null 23:16:37 kafka | ssl.key.password = null 23:16:37 kafka | ssl.keymanager.algorithm = SunX509 23:16:37 kafka | ssl.keystore.certificate.chain = null 23:16:37 kafka | ssl.keystore.key = null 23:16:37 kafka | ssl.keystore.location = null 23:16:37 kafka | ssl.keystore.password = null 23:16:37 kafka | ssl.keystore.type = JKS 23:16:37 kafka | ssl.principal.mapping.rules = DEFAULT 23:16:37 kafka | ssl.protocol = TLSv1.3 23:16:37 kafka | ssl.provider = null 23:16:37 kafka | ssl.secure.random.implementation = null 23:16:37 kafka | ssl.trustmanager.algorithm = PKIX 23:16:37 kafka | ssl.truststore.certificates = null 23:16:37 kafka | ssl.truststore.location = null 23:16:37 kafka | ssl.truststore.password = null 23:16:37 kafka | ssl.truststore.type = JKS 23:16:37 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 23:16:37 
kafka | transaction.max.timeout.ms = 900000 23:16:37 kafka | transaction.partition.verification.enable = true 23:16:37 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 23:16:37 kafka | transaction.state.log.load.buffer.size = 5242880 23:16:37 kafka | transaction.state.log.min.isr = 2 23:16:37 kafka | transaction.state.log.num.partitions = 50 23:16:37 prometheus | ts=2024-04-10T23:14:03.907Z caller=main.go:573 level=info msg="No time or size retention was set so using the default time retention" duration=15d 23:16:37 prometheus | ts=2024-04-10T23:14:03.907Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.1, branch=HEAD, revision=855b5ac4b80956874eb1790a04c92327f2f99e38)" 23:16:37 prometheus | ts=2024-04-10T23:14:03.907Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@d3785d7783f2, date=20240328-09:27:30, tags=netgo,builtinassets,stringlabels)" 23:16:37 prometheus | ts=2024-04-10T23:14:03.908Z caller=main.go:623 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" 23:16:37 prometheus | ts=2024-04-10T23:14:03.908Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)" 23:16:37 prometheus | ts=2024-04-10T23:14:03.908Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)" 23:16:37 prometheus | ts=2024-04-10T23:14:03.911Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 23:16:37 prometheus | ts=2024-04-10T23:14:03.912Z caller=main.go:1129 level=info msg="Starting TSDB ..." 23:16:37 prometheus | ts=2024-04-10T23:14:03.917Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090 23:16:37 prometheus | ts=2024-04-10T23:14:03.917Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9090 23:16:37 prometheus | ts=2024-04-10T23:14:03.925Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 23:16:37 prometheus | ts=2024-04-10T23:14:03.925Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.28µs 23:16:37 prometheus | ts=2024-04-10T23:14:03.925Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while" 23:16:37 prometheus | ts=2024-04-10T23:14:03.925Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 23:16:37 prometheus | ts=2024-04-10T23:14:03.925Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=24.1µs wal_replay_duration=266.777µs wbl_replay_duration=170ns total_replay_duration=314.308µs 23:16:37 prometheus | ts=2024-04-10T23:14:03.927Z caller=main.go:1150 level=info fs_type=EXT4_SUPER_MAGIC 23:16:37 prometheus | ts=2024-04-10T23:14:03.927Z caller=main.go:1153 level=info msg="TSDB started" 23:16:37 prometheus | ts=2024-04-10T23:14:03.927Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 23:16:37 prometheus | ts=2024-04-10T23:14:03.929Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.838036ms db_storage=1.82µs remote_storage=2.55µs web_handler=790ns query_engine=1.58µs scrape=514.414µs scrape_sd=156.774µs notify=29.791µs notify_sd=10.49µs rules=2.42µs tracing=5.75µs 23:16:37 prometheus | ts=2024-04-10T23:14:03.929Z caller=main.go:1114 level=info msg="Server is ready to receive web requests." 23:16:37 prometheus | ts=2024-04-10T23:14:03.929Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..." 23:16:37 policy-apex-pdp | mariadb (172.17.0.4:3306) open 23:16:37 policy-apex-pdp | kafka (172.17.0.6:9092) open 23:16:37 policy-apex-pdp | Waiting for mariadb port 3306... 23:16:37 policy-apex-pdp | Waiting for kafka port 9092... 23:16:37 policy-apex-pdp | Waiting for pap port 6969... 
23:16:37 policy-apex-pdp | pap (172.17.0.9:6969) open 23:16:37 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.027+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.218+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:37 policy-apex-pdp | allow.auto.create.topics = true 23:16:37 policy-apex-pdp | auto.commit.interval.ms = 5000 23:16:37 policy-apex-pdp | auto.include.jmx.reporter = true 23:16:37 policy-apex-pdp | auto.offset.reset = latest 23:16:37 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:16:37 policy-apex-pdp | check.crcs = true 23:16:37 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:16:37 policy-apex-pdp | client.id = consumer-8c9f1915-d141-4575-8b29-0255c152ac0a-1 23:16:37 policy-apex-pdp | client.rack = 23:16:37 policy-apex-pdp | connections.max.idle.ms = 540000 23:16:37 policy-apex-pdp | default.api.timeout.ms = 60000 23:16:37 policy-apex-pdp | enable.auto.commit = true 23:16:37 policy-apex-pdp | exclude.internal.topics = true 23:16:37 policy-apex-pdp | fetch.max.bytes = 52428800 23:16:37 policy-apex-pdp | fetch.max.wait.ms = 500 23:16:37 policy-apex-pdp | fetch.min.bytes = 1 23:16:37 policy-apex-pdp | group.id = 8c9f1915-d141-4575-8b29-0255c152ac0a 23:16:37 policy-apex-pdp | group.instance.id = null 23:16:37 policy-apex-pdp | heartbeat.interval.ms = 3000 23:16:37 policy-apex-pdp | interceptor.classes = [] 23:16:37 policy-apex-pdp | internal.leave.group.on.close = true 23:16:37 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:37 policy-apex-pdp | isolation.level = read_uncommitted 23:16:37 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:37 policy-apex-pdp | max.partition.fetch.bytes = 1048576 23:16:37 policy-apex-pdp | max.poll.interval.ms = 300000 23:16:37 policy-apex-pdp | max.poll.records = 500 23:16:37 policy-apex-pdp | metadata.max.age.ms = 300000 23:16:37 policy-apex-pdp | metric.reporters = [] 23:16:37 policy-apex-pdp | metrics.num.samples = 2 23:16:37 policy-apex-pdp | metrics.recording.level = INFO 23:16:37 policy-apex-pdp | metrics.sample.window.ms = 30000 23:16:37 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:37 policy-apex-pdp | receive.buffer.bytes = 65536 23:16:37 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:16:37 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:37 policy-apex-pdp | request.timeout.ms = 30000 23:16:37 policy-apex-pdp | retry.backoff.ms = 100 23:16:37 
policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:37 policy-apex-pdp | sasl.jaas.config = null 23:16:37 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:37 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:37 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:37 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:37 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:37 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:37 policy-apex-pdp | sasl.login.class = null 23:16:37 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:37 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:37 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:37 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:37 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:37 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:37 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:37 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:37 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:37 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:37 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:37 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:37 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:37 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:37 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:37 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:37 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:37 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:37 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:16:37 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:37 policy-apex-pdp | security.providers = null 23:16:37 policy-apex-pdp | send.buffer.bytes = 131072 23:16:37 policy-apex-pdp | session.timeout.ms = 45000 23:16:37 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:37 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:37 policy-apex-pdp | ssl.cipher.suites = null 23:16:37 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:37 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:37 policy-apex-pdp | ssl.engine.factory.class = null 23:16:37 policy-apex-pdp | ssl.key.password = null 23:16:37 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:16:37 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:16:37 policy-apex-pdp | ssl.keystore.key = null 23:16:37 policy-apex-pdp | ssl.keystore.location = null 23:16:37 policy-apex-pdp | ssl.keystore.password = null 23:16:37 policy-apex-pdp | ssl.keystore.type = JKS 23:16:37 policy-apex-pdp | ssl.protocol = TLSv1.3 23:16:37 policy-apex-pdp | ssl.provider = null 23:16:37 policy-apex-pdp | ssl.secure.random.implementation = null 23:16:37 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:16:37 policy-apex-pdp | ssl.truststore.certificates = null 23:16:37 policy-apex-pdp | ssl.truststore.location = null 23:16:37 policy-apex-pdp | ssl.truststore.password = null 23:16:37 policy-apex-pdp | ssl.truststore.type = JKS 23:16:37 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:37 policy-apex-pdp | 23:16:37 policy-apex-pdp | 
[2024-04-10T23:14:41.404+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.404+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.404+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1712790881403 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.407+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-8c9f1915-d141-4575-8b29-0255c152ac0a-1, groupId=8c9f1915-d141-4575-8b29-0255c152ac0a] Subscribed to topic(s): policy-pdp-pap 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.420+00:00|INFO|ServiceManager|main] service manager starting 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.420+00:00|INFO|ServiceManager|main] service manager starting topics 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.427+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=8c9f1915-d141-4575-8b29-0255c152ac0a, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.450+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:37 policy-apex-pdp | allow.auto.create.topics = true 23:16:37 policy-apex-pdp | auto.commit.interval.ms = 5000 23:16:37 policy-apex-pdp | auto.include.jmx.reporter = true 23:16:37 policy-apex-pdp | auto.offset.reset = latest 23:16:37 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:16:37 policy-apex-pdp | check.crcs = true 23:16:37 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:16:37 policy-apex-pdp | client.id = consumer-8c9f1915-d141-4575-8b29-0255c152ac0a-2 23:16:37 policy-apex-pdp | client.rack = 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.900750001Z level=info msg="Migration successfully executed" id="Update uid value" duration=541.614µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.903918738Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.905015998Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.09712ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.912098571Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.913672264Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.573543ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.916988045Z level=info msg="Executing migration" id="create api_key table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.917866708Z level=info msg="Migration successfully executed" id="create api_key table" duration=878.143µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.920993894Z level=info msg="Executing migration" id="add index api_key.account_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.921904489Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=910.455µs 23:16:37 grafana | 
logger=migrator t=2024-04-10T23:14:05.927485442Z level=info msg="Executing migration" id="add index api_key.key" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.928483078Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=997.037µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.931971883Z level=info msg="Executing migration" id="add index api_key.account_id_name" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.93291391Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=941.927µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.936529238Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.93734215Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=812.962µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.943794757Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.944403753Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=609.266µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.947614551Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.948205107Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=590.726µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.951344902Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.957777539Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=6.431777ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.963745631Z level=info msg="Executing migration" id="create api_key table v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.964349638Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=603.047µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.967323659Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.968058669Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=734.54µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.973478367Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.974282849Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=804.392µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.977510257Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.97834178Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=831.303µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.981690251Z level=info msg="Executing migration" id="copy api_key v1 to v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.982121972Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=431.091µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.985292789Z 
level=info msg="Executing migration" id="Drop old table api_key_v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.985919277Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=626.128µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.99116402Z level=info msg="Executing migration" id="Update api_key table charset" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.991210171Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=46.841µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.995600251Z level=info msg="Executing migration" id="Add expires to api_key table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:05.997442271Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=1.84168ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.000989888Z level=info msg="Executing migration" id="Add service account foreign key" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.004537334Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=3.546096ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.011116764Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.011292739Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=176.215µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.015182326Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.017745511Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.562625ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.02088283Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.023525256Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.641856ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.030081821Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.030931583Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=849.032µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.034503662Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.035282682Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=778.58µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.039101128Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.040187875Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.113587ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.046156815Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.047054838Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=897.782µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.050494455Z level=info 
msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.051420798Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=926.203µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.05471155Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.055604004Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=892.143µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.062170388Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.062401894Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=236.356µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.065950303Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.066012575Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=63.812µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.068722533Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 23:16:37 policy-api | Waiting for mariadb port 3306... 23:16:37 policy-api | mariadb (172.17.0.4:3306) open 23:16:37 policy-api | Waiting for policy-db-migrator port 6824... 23:16:37 policy-api | policy-db-migrator (172.17.0.7:6824) open 23:16:37 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 23:16:37 policy-api | 23:16:37 policy-api | . ____ _ __ _ _ 23:16:37 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 23:16:37 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 23:16:37 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 23:16:37 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / 23:16:37 policy-api | =========|_|==============|___/=/_/_/_/ 23:16:37 policy-api | :: Spring Boot :: (v3.1.8) 23:16:37 policy-api | 23:16:37 policy-api | [2024-04-10T23:14:15.149+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.10 with PID 22 (/app/api.jar started by policy in /opt/app/policy/api/bin) 23:16:37 policy-api | [2024-04-10T23:14:15.152+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" 23:16:37 policy-api | [2024-04-10T23:14:17.076+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 23:16:37 policy-api | [2024-04-10T23:14:17.178+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 91 ms. Found 6 JPA repository interfaces. 23:16:37 policy-api | [2024-04-10T23:14:17.653+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 23:16:37 policy-api | [2024-04-10T23:14:17.654+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 23:16:37 policy-api | [2024-04-10T23:14:18.361+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 23:16:37 policy-api | [2024-04-10T23:14:18.373+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 23:16:37 policy-api | [2024-04-10T23:14:18.376+00:00|INFO|StandardService|main] Starting service [Tomcat] 23:16:37 policy-api | [2024-04-10T23:14:18.376+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] 23:16:37 policy-api | [2024-04-10T23:14:18.482+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 23:16:37 policy-api | [2024-04-10T23:14:18.482+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3248 ms 23:16:37 policy-api | [2024-04-10T23:14:18.969+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 23:16:37 policy-api | [2024-04-10T23:14:19.058+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 23:16:37 policy-api | [2024-04-10T23:14:19.063+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer 23:16:37 policy-api | [2024-04-10T23:14:19.119+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 23:16:37 policy-api | [2024-04-10T23:14:19.495+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 23:16:37 policy-api | [2024-04-10T23:14:19.518+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 23:16:37 policy-api | [2024-04-10T23:14:19.629+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@2620e717 23:16:37 policy-api | [2024-04-10T23:14:19.632+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 23:16:37 policy-api | [2024-04-10T23:14:21.766+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 23:16:37 policy-api | [2024-04-10T23:14:21.771+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 23:16:37 policy-api | [2024-04-10T23:14:22.946+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 23:16:37 policy-api | [2024-04-10T23:14:23.828+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 23:16:37 policy-api | [2024-04-10T23:14:25.064+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 23:16:37 policy-api | [2024-04-10T23:14:25.292+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@4b7feb38, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@6e04275e, org.springframework.security.web.context.SecurityContextHolderFilter@54e1e8a7, org.springframework.security.web.header.HeaderWriterFilter@547a79cd, org.springframework.security.web.authentication.logout.LogoutFilter@4bbb00a4, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@1e33203f, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@276961df, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@dcaa0e8, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@700f356b, org.springframework.security.web.access.ExceptionTranslationFilter@2542d320, org.springframework.security.web.access.intercept.AuthorizationFilter@2986e26f] 23:16:37 policy-api | [2024-04-10T23:14:26.146+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 23:16:37 policy-api | [2024-04-10T23:14:26.256+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 23:16:37 policy-api | [2024-04-10T23:14:26.286+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' 23:16:37 policy-api | [2024-04-10T23:14:26.311+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.973 seconds (process running for 12.644) 23:16:37 policy-api | [2024-04-10T23:14:39.930+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' 23:16:37 policy-api | [2024-04-10T23:14:39.930+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' 23:16:37 policy-api | [2024-04-10T23:14:39.932+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 2 ms 23:16:37 policy-api | [2024-04-10T23:14:44.746+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers: 23:16:37 policy-api | [] 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS 
jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, 
METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0450-pdpgroup.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) 23:16:37 policy-pap | Waiting for mariadb port 3306... 23:16:37 policy-pap | mariadb (172.17.0.4:3306) open 23:16:37 policy-pap | Waiting for kafka port 9092... 23:16:37 policy-pap | kafka (172.17.0.6:9092) open 23:16:37 policy-pap | Waiting for api port 6969... 23:16:37 policy-pap | api (172.17.0.8:6969) open 23:16:37 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 23:16:37 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 23:16:37 policy-pap | 23:16:37 policy-pap | . 
____ _ __ _ _ 23:16:37 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 23:16:37 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 23:16:37 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 23:16:37 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 23:16:37 policy-pap | =========|_|==============|___/=/_/_/_/ 23:16:37 policy-pap | :: Spring Boot :: (v3.1.8) 23:16:37 policy-pap | 23:16:37 policy-pap | [2024-04-10T23:14:28.617+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.10 with PID 36 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 23:16:37 policy-pap | [2024-04-10T23:14:28.619+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" 23:16:37 policy-pap | [2024-04-10T23:14:30.673+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 23:16:37 policy-pap | [2024-04-10T23:14:30.808+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 123 ms. Found 7 JPA repository interfaces. 23:16:37 policy-pap | [2024-04-10T23:14:31.260+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 23:16:37 policy-pap | [2024-04-10T23:14:31.260+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 23:16:37 policy-pap | [2024-04-10T23:14:32.058+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 23:16:37 policy-pap | [2024-04-10T23:14:32.070+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 23:16:37 policy-pap | [2024-04-10T23:14:32.072+00:00|INFO|StandardService|main] Starting service [Tomcat] 23:16:37 policy-pap | [2024-04-10T23:14:32.072+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] 23:16:37 policy-pap | [2024-04-10T23:14:32.183+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 23:16:37 policy-pap | [2024-04-10T23:14:32.183+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3470 ms 23:16:37 policy-pap | [2024-04-10T23:14:32.696+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 23:16:37 policy-pap | [2024-04-10T23:14:32.795+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 23:16:37 policy-pap | [2024-04-10T23:14:32.799+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer 23:16:37 policy-pap | [2024-04-10T23:14:32.847+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 23:16:37 policy-pap | [2024-04-10T23:14:33.223+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 23:16:37 policy-pap | [2024-04-10T23:14:33.247+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 23:16:37 policy-pap | [2024-04-10T23:14:33.389+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@2def7a7a 23:16:37 policy-pap | [2024-04-10T23:14:33.391+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
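The HikariCP lines above ("HikariPool-1 - Starting...", "Added connection org.mariadb.jdbc.Connection@...", "Start completed.") correspond to an eager pool start against the MariaDB container. A minimal sketch of that kind of pool initialization is shown below; the JDBC URL, database name, credentials, and pool size are illustrative placeholders only, not the values actually used by policy-pap.

// Minimal HikariCP pool start against a MariaDB instance (sketch only).
// URL, database name, credentials and pool size are assumed placeholders.
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;

public class PoolSketch {
    public static void main(String[] args) throws Exception {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mariadb://mariadb:3306/policyadmin"); // placeholder database name
        config.setUsername("policy_user"); // placeholder credentials
        config.setPassword("policy_user");
        config.setMaximumPoolSize(10);
        // Constructing the data source starts the pool eagerly, producing the
        // "HikariPool-1 - Starting..." / "Start completed." log lines seen above.
        try (HikariDataSource ds = new HikariDataSource(config);
             Connection conn = ds.getConnection()) {
            System.out.println(conn.isValid(2)); // simple liveness check on the pooled connection
        }
    }
}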
23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0470-pdp.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.07179027Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=3.068647ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.074866557Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.077595376Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.728299ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.082024308Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.08213492Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=110.082µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.084815298Z level=info msg="Executing migration" id="create quota table v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.08570435Z level=info msg="Migration successfully executed" id="create quota table v1" duration=863.281µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.089452275Z level=info msg="Executing migration" id="create 
index UQE_quota_org_id_user_id_target - v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.090783328Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.328453ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.097703782Z level=info msg="Executing migration" id="Update quota table charset" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.097764854Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=62.102µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.10437118Z level=info msg="Executing migration" id="create plugin_setting table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.105262712Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=891.382µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.108903244Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.109880308Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=977.234µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.114388872Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.117303565Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.915683ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.120784733Z level=info msg="Executing migration" id="Update plugin_setting table charset" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.120809933Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=26.07µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.124713881Z level=info msg="Executing migration" id="create session table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.125663186Z level=info msg="Migration successfully executed" id="create session table" duration=948.805µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.130195569Z level=info msg="Executing migration" id="Drop old table playlist table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.130457866Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=259.356µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.134557009Z level=info msg="Executing migration" id="Drop old table playlist_item table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.134775345Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=217.265µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.138796565Z level=info msg="Executing migration" id="create playlist table v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.139588136Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=790.681µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.179309955Z level=info msg="Executing migration" id="create playlist item table v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.180637268Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.326903ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.187579803Z level=info msg="Executing migration" id="Update playlist table charset" 23:16:37 
grafana | logger=migrator t=2024-04-10T23:14:06.187606704Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=28.04µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.19106664Z level=info msg="Executing migration" id="Update playlist_item table charset" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.191093821Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=27.791µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.194928487Z level=info msg="Executing migration" id="Add playlist column created_at" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.199753749Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.826212ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.203584105Z level=info msg="Executing migration" id="Add playlist column updated_at" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.206667042Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.080087ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.21172518Z level=info msg="Executing migration" id="drop preferences table v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.211828912Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=104.013µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.215664899Z level=info msg="Executing migration" id="drop preferences table v3" 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE 
IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0570-toscadatatype.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 
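The migrator output above is a sequence of idempotent CREATE TABLE IF NOT EXISTS statements. A minimal sketch of applying one of them over plain JDBC follows; the connection URL and credentials are assumed placeholders, and the real db-migrator drives these scripts itself, so this only illustrates the effect of a single statement.

// Applying one of the idempotent DDL statements shown above via plain JDBC (sketch only).
// Requires the MariaDB Connector/J driver on the classpath; URL and credentials are placeholders.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class DdlSketch {
    public static void main(String[] args) throws Exception {
        String ddl = "CREATE TABLE IF NOT EXISTS toscadatatypes ("
                + "name VARCHAR(120) NOT NULL, "
                + "version VARCHAR(20) NOT NULL, "
                + "PRIMARY KEY PK_TOSCADATATYPES (name, version))";
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mariadb://mariadb:3306/policyadmin", "policy_user", "policy_user"); // placeholders
             Statement stmt = conn.createStatement()) {
            stmt.executeUpdate(ddl); // no-op if the table already exists
        }
    }
}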
23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0630-toscanodetype.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 
policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0660-toscaparameter.sql 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.215771432Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=107.443µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.219506915Z level=info msg="Executing migration" id="create preferences table v3" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.220931752Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.423337ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.22605315Z level=info msg="Executing migration" id="Update preferences table charset" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.226113711Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=61.601µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.229957029Z level=info msg="Executing migration" id="Add column team_id in preferences" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.233056386Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.098877ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.236734699Z level=info msg="Executing migration" id="Update team_id column values in preferences" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.236957924Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=222.975µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.240684329Z level=info msg="Executing migration" id="Add column week_start in preferences" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.244777101Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=4.092212ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.249452059Z level=info msg="Executing migration" id="Add column preferences.json_data" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.252544347Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.090848ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.256316361Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.256398773Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=82.952µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.260744653Z level=info msg="Executing migration" id="Add preferences index org_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.26265835Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=1.914127ms 23:16:37 grafana | logger=migrator 
t=2024-04-10T23:14:06.26855679Z level=info msg="Executing migration" id="Add preferences index user_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.270124179Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.565218ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.274785136Z level=info msg="Executing migration" id="create alert table v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.277109435Z level=info msg="Migration successfully executed" id="create alert table v1" duration=2.324819ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.281750931Z level=info msg="Executing migration" id="add index alert org_id & id " 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.283432374Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.678242ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.289056455Z level=info msg="Executing migration" id="add index alert state" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.290035609Z level=info msg="Migration successfully executed" id="add index alert state" duration=978.764µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.294239755Z level=info msg="Executing migration" id="add index alert dashboard_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.295168549Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=928.553µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.300123753Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.300878002Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=753.549µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.30515302Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.306518964Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.364114ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.311024158Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.312695329Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.669201ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.317815838Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.327249616Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=9.430988ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.331085862Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.331880172Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=796µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.335234146Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.336204021Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add 
unique index alert_rule_tag.alert_id_tag_id V2" duration=969.295µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.341633128Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.342265793Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=632.696µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.346499329Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.347447374Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=947.515µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.35128453Z level=info msg="Executing migration" id="create alert_notification table v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.352120381Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=835.331µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.35686205Z level=info msg="Executing migration" id="Add column is_default" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.362098232Z level=info msg="Migration successfully executed" id="Add column is_default" duration=5.234072ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.366276317Z level=info msg="Executing migration" id="Add column frequency" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.3699416Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.666072ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.37355275Z level=info msg="Executing migration" id="Add column send_reminder" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.377056158Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.502828ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.380609628Z level=info msg="Executing migration" id="Add column disable_resolve_message" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.384112186Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.502008ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.388824484Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.389793829Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=967.615µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.393940453Z level=info msg="Executing migration" id="Update alert table charset" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.393970523Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=31.12µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.397432961Z level=info msg="Executing migration" id="Update alert_notification table charset" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.397505113Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=73.762µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.40258145Z level=info msg="Executing migration" id="create notification_journal table v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.403853472Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.269002ms 23:16:37 grafana | logger=migrator 
t=2024-04-10T23:14:06.408317514Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.409944615Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=1.626371ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.415473584Z level=info msg="Executing migration" id="drop alert_notification_journal" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.416281004Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=806.76µs 23:16:37 kafka | transaction.state.log.replication.factor = 3 23:16:37 kafka | transaction.state.log.segment.bytes = 104857600 23:16:37 kafka | transactional.id.expiration.ms = 604800000 23:16:37 kafka | unclean.leader.election.enable = false 23:16:37 kafka | unstable.api.versions.enable = false 23:16:37 kafka | zookeeper.clientCnxnSocket = null 23:16:37 kafka | zookeeper.connect = zookeeper:2181 23:16:37 kafka | zookeeper.connection.timeout.ms = null 23:16:37 kafka | zookeeper.max.in.flight.requests = 10 23:16:37 kafka | zookeeper.metadata.migration.enable = false 23:16:37 kafka | zookeeper.session.timeout.ms = 18000 23:16:37 kafka | zookeeper.set.acl = false 23:16:37 kafka | zookeeper.ssl.cipher.suites = null 23:16:37 kafka | zookeeper.ssl.client.enable = false 23:16:37 kafka | zookeeper.ssl.crl.enable = false 23:16:37 kafka | zookeeper.ssl.enabled.protocols = null 23:16:37 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 23:16:37 kafka | zookeeper.ssl.keystore.location = null 23:16:37 kafka | zookeeper.ssl.keystore.password = null 23:16:37 kafka | zookeeper.ssl.keystore.type = null 23:16:37 policy-db-migrator | -------------- 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.421507876Z level=info msg="Executing migration" id="create alert_notification_state table v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.422863711Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.353635ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.426931363Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.428952264Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=2.020082ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.432560694Z level=info msg="Executing migration" id="Add for to alert table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.43676746Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=4.207656ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.443285734Z level=info msg="Executing migration" id="Add column uid in alert_notification" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.446931385Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.647631ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.45028143Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.450485225Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=203.646µs 23:16:37 grafana | logger=migrator 
t=2024-04-10T23:14:06.452893845Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.453916442Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.021547ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.460196139Z level=info msg="Executing migration" id="Remove unique index org_id_name" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.461308627Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.106448ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.464742844Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.470268493Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=5.526009ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.475195606Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.475283288Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=88.322µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.480932581Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.481857034Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=923.834µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.48528465Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.486592603Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.306323ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.490647585Z level=info msg="Executing migration" id="Drop old annotation table v4" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.490810429Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=163.134µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.496250666Z level=info msg="Executing migration" id="create annotation table v5" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.49721863Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=967.754µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.548244604Z level=info msg="Executing migration" id="add index annotation 0 v3" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.549699251Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.453847ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.553962897Z level=info msg="Executing migration" id="add index annotation 1 v3" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.555350513Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.390156ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.561700692Z level=info msg="Executing migration" id="add index annotation 2 v3" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.563120248Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.419016ms 23:16:37 grafana | 
logger=migrator t=2024-04-10T23:14:06.567299703Z level=info msg="Executing migration" id="add index annotation 3 v3" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.568898084Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.597571ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.572939445Z level=info msg="Executing migration" id="add index annotation 4 v3" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.573948211Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.008396ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.580255109Z level=info msg="Executing migration" id="Update annotation table charset" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.580326871Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=72.652µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.583902271Z level=info msg="Executing migration" id="Add column region_id to annotation table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.590083906Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=6.183855ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.593919923Z level=info msg="Executing migration" id="Drop category_id index" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.594710763Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=790.11µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.60056743Z level=info msg="Executing migration" id="Add column tags to annotation table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.606883518Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=6.314568ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.610333476Z level=info msg="Executing migration" id="Create annotation_tag table v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.610982552Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=648.386µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.614247234Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.615098485Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=851.861µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.621857536Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.623134317Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.276511ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.626749838Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.640517064Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=13.767526ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.643987222Z level=info msg="Executing migration" id="Create annotation_tag table v3" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.64468947Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" 
duration=701.288µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.651550642Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.652831115Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.278733ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.656966118Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 23:16:37 kafka | zookeeper.ssl.ocsp.enable = false 23:16:37 kafka | zookeeper.ssl.protocol = TLSv1.2 23:16:37 kafka | zookeeper.ssl.truststore.location = null 23:16:37 kafka | zookeeper.ssl.truststore.password = null 23:16:37 kafka | zookeeper.ssl.truststore.type = null 23:16:37 kafka | (kafka.server.KafkaConfig) 23:16:37 kafka | [2024-04-10 23:14:05,022] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:37 kafka | [2024-04-10 23:14:05,026] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:37 kafka | [2024-04-10 23:14:05,028] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:37 kafka | [2024-04-10 23:14:05,035] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:37 kafka | [2024-04-10 23:14:05,065] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) 23:16:37 kafka | [2024-04-10 23:14:05,073] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) 23:16:37 kafka | [2024-04-10 23:14:05,081] INFO Loaded 0 logs in 15ms (kafka.log.LogManager) 23:16:37 kafka | [2024-04-10 23:14:05,082] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) 23:16:37 kafka | [2024-04-10 23:14:05,083] INFO Starting log flusher with a default period of 9223372036854775807 ms. 
(kafka.log.LogManager) 23:16:37 kafka | [2024-04-10 23:14:05,095] INFO Starting the log cleaner (kafka.log.LogCleaner) 23:16:37 kafka | [2024-04-10 23:14:05,140] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) 23:16:37 kafka | [2024-04-10 23:14:05,191] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 23:16:37 kafka | [2024-04-10 23:14:05,205] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 23:16:37 kafka | [2024-04-10 23:14:05,230] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 23:16:37 kafka | [2024-04-10 23:14:05,552] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 23:16:37 kafka | [2024-04-10 23:14:05,571] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) 23:16:37 kafka | [2024-04-10 23:14:05,571] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 23:16:37 kafka | [2024-04-10 23:14:05,576] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) 23:16:37 kafka | [2024-04-10 23:14:05,581] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 23:16:37 kafka | [2024-04-10 23:14:05,609] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:37 kafka | [2024-04-10 23:14:05,611] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:37 kafka | [2024-04-10 23:14:05,612] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:37 kafka | [2024-04-10 23:14:05,614] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:37 kafka | [2024-04-10 23:14:05,619] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:37 kafka | [2024-04-10 23:14:05,629] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) 23:16:37 kafka | [2024-04-10 23:14:05,630] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) 23:16:37 kafka | [2024-04-10 23:14:05,653] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) 23:16:37 kafka | [2024-04-10 23:14:05,675] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1712790845665,1712790845665,1,0,0,72057608973713409,258,0,27 23:16:37 kafka | (kafka.zk.KafkaZkClient) 23:16:37 kafka | [2024-04-10 23:14:05,676] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 23:16:37 kafka | [2024-04-10 23:14:05,729] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 23:16:37 kafka | [2024-04-10 23:14:05,735] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:37 kafka | [2024-04-10 23:14:05,741] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:37 kafka | [2024-04-10 23:14:05,742] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:37 kafka | [2024-04-10 23:14:05,745] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) 23:16:37 kafka | [2024-04-10 23:14:05,755] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) 23:16:37 kafka | [2024-04-10 23:14:05,760] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) 23:16:37 kafka | [2024-04-10 23:14:05,760] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) 23:16:37 kafka | [2024-04-10 23:14:05,766] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) 23:16:37 kafka | [2024-04-10 23:14:05,769] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) 23:16:37 kafka | [2024-04-10 23:14:05,789] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) 23:16:37 kafka | [2024-04-10 23:14:05,792] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) 23:16:37 kafka | [2024-04-10 23:14:05,793] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) 23:16:37 kafka | [2024-04-10 23:14:05,804] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). 
(kafka.server.metadata.ZkMetadataCache) 23:16:37 kafka | [2024-04-10 23:14:05,804] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) 23:16:37 kafka | [2024-04-10 23:14:05,814] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.657416019Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=450.001µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.660733194Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.661245926Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=512.462µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.666891138Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.667057332Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=167.494µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.670184431Z level=info msg="Executing migration" id="Add created time to annotation table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.676469569Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=6.284378ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.680084569Z level=info msg="Executing migration" id="Add updated time to annotation table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.684020999Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.93572ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.689547738Z level=info msg="Executing migration" id="Add index for created in annotation table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.690385809Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=837.561µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.693861766Z level=info msg="Executing migration" id="Add index for updated in annotation table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.694700498Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=833.672µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.698702208Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.699048807Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=346.999µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.702679759Z level=info msg="Executing migration" id="Add epoch_end column" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.706769531Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.091082ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.712122986Z level=info msg="Executing migration" id="Add index for epoch_end" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.712990587Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=866.871µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.716694491Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 
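Interleaved with the kafka and policy-db-migrator streams, grafana logs each schema migration twice: an "Executing migration" entry and a "Migration successfully executed" entry carrying a duration in µs or ms. A small, hypothetical helper like the one below (not part of the CSIT job) can pull those durations out of a captured console log and rank the slowest migrations; the regular expression mirrors the message format shown in this section, and it assumes each log entry occupies a single line in the file being scanned.

import re

# Matches the grafana migrator completion entries seen in this log, e.g.
#   msg="Migration successfully executed" id="create alert table v1" duration=2.324819ms
ENTRY = re.compile(
    r'msg="Migration successfully executed" id="(?P<id>[^"]+)" '
    r'duration=(?P<value>[\d.]+)(?P<unit>µs|ms|s)'
)
UNIT_TO_MS = {"µs": 0.001, "ms": 1.0, "s": 1000.0}

def slowest_migrations(log_text, top=5):
    """Return the `top` slowest migrations as (milliseconds, id) pairs."""
    timings = [(float(m.group("value")) * UNIT_TO_MS[m.group("unit")], m.group("id"))
               for m in ENTRY.finditer(log_text)]
    return sorted(timings, reverse=True)[:top]

if __name__ == "__main__":
    # Sample entry copied from the log above.
    sample = ('23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.640517064Z '
              'level=info msg="Migration successfully executed" '
              'id="Rename table annotation_tag to annotation_tag_v2 - v2" '
              'duration=13.767526ms')
    print(slowest_migrations(sample))

On this stretch of the log the largest completion times belong to the table renames (for example the 13.767526 ms annotation_tag rename), which is the kind of outlier such a helper is meant to surface.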
23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.716852955Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=158.204µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.720030925Z level=info msg="Executing migration" id="Move region to single row" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.720612859Z level=info msg="Migration successfully executed" id="Move region to single row" duration=581.264µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.727794459Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.729040111Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.242782ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.733684318Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.73495108Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.272442ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.741514385Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.742669424Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.151779ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.747379482Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.748627654Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.250712ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.753799204Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.754962953Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.164199ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.759172359Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.760220545Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.047516ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.763533899Z level=info msg="Executing migration" id="Increase tags column to length 4096" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.763723323Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=189.334µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.77031336Z level=info msg="Executing migration" id="create test_data table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.77195583Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.64203ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.777216093Z level=info msg="Executing migration" id="create dashboard_version table v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.778365732Z level=info msg="Migration successfully 
executed" id="create dashboard_version table v1" duration=1.149129ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.782924136Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.783997564Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.073088ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.788002134Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0670-toscapolicies.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0690-toscapolicy.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) 23:16:37 policy-db-migrator | 
-------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0730-toscaproperty.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0770-toscarequirement.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name 
VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0780-toscarequirements.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.788778724Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=776.109µs 23:16:37 grafana | logger=migrator 
t=2024-04-10T23:14:06.794280022Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.794757434Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=476.682µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.80056392Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.8013556Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=791.9µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.805013272Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.805445373Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=431.771µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.81130506Z level=info msg="Executing migration" id="create team table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.812782098Z level=info msg="Migration successfully executed" id="create team table" duration=1.476908ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.816903121Z level=info msg="Executing migration" id="add index team.org_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.818014189Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.113688ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.825383174Z level=info msg="Executing migration" id="add unique index team_org_id_name" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.827006665Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.622941ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.834017611Z level=info msg="Executing migration" id="Add column uid in team" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.83870787Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.689558ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.842545376Z level=info msg="Executing migration" id="Update uid column values in team" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.842911576Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=365.33µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.846403563Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.847532431Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.128488ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.851322837Z level=info msg="Executing migration" id="create team member table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.851940983Z level=info msg="Migration successfully executed" id="create team member table" duration=617.786µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.856970479Z level=info msg="Executing migration" id="add index team_member.org_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.858645571Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.674782ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.862864777Z level=info msg="Executing migration" id="add 
unique index team_member_org_id_team_id_user_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.864555389Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.689712ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.868753306Z level=info msg="Executing migration" id="add index team_member.team_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.870011017Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.255202ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.875672649Z level=info msg="Executing migration" id="Add column email to team table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.88046663Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.792131ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.886994554Z level=info msg="Executing migration" id="Add column external to team_member table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.89161068Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.615576ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.894927913Z level=info msg="Executing migration" id="Add column permission to team_member table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.899614992Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.686369ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.904545586Z level=info msg="Executing migration" id="create dashboard acl table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.905575452Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.029055ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.909450239Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.910437334Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=986.785µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.914146627Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.915232374Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.085377ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.922118338Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.923134483Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.016135ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.927343649Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.929497083Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=2.153654ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.933946065Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.935205857Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.262712ms 23:16:37 grafana | logger=migrator 
t=2024-04-10T23:14:06.940362816Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.941380142Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.017156ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.945022914Z level=info msg="Executing migration" id="add index dashboard_permission" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.946245134Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.22359ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.949981499Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.950544863Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=561.144µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.955804635Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.956223685Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=418.74µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.959779814Z level=info msg="Executing migration" id="create tag table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.960739859Z level=info msg="Migration successfully executed" id="create tag table" duration=959.835µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.966586846Z level=info msg="Executing migration" id="add index tag.key_value" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.967592761Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.005745ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.972006113Z level=info msg="Executing migration" id="create login attempt table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.973333896Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.326712ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.977625154Z level=info msg="Executing migration" id="add index login_attempt.username" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.979232404Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.60716ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.985089831Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.986348493Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.260142ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:06.990175099Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.002290714Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=12.116044ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.005631318Z level=info msg="Executing migration" id="create login_attempt v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.006311815Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=680.007µs 23:16:37 grafana | logger=migrator 
t=2024-04-10T23:14:07.013593547Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.015798111Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=2.210005ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.020521779Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.021162774Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=640.635µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.024486547Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.025409471Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=922.324µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.030634011Z level=info msg="Executing migration" id="create user auth table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.031512823Z level=info msg="Migration successfully executed" id="create user auth table" duration=878.572µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.03504208Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.036128868Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.086038ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.041770829Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.041985154Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=214.016µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.047635935Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.054212008Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=6.575783ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.05789603Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.061608163Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=3.711954ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.067585611Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.072905224Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.319043ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.078637597Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.083396155Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=4.758728ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.087127358Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.087896937Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=768.669µs 23:16:37 grafana | logger=migrator 
t=2024-04-10T23:14:07.091886086Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.097093246Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=5.20676ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.100510742Z level=info msg="Executing migration" id="create server_lock table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.101688041Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.162158ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.107198828Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.108265044Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.065876ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.113824583Z level=info msg="Executing migration" id="create user auth token table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.114795318Z level=info msg="Migration successfully executed" id="create user auth token table" duration=967.115µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.121359891Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.122395996Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.035825ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.126411237Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.128712435Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=2.299057ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.133690418Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.134844477Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.152639ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.141225586Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 23:16:37 policy-pap | [2024-04-10T23:14:35.629+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 23:16:37 policy-pap | [2024-04-10T23:14:35.645+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 23:16:37 policy-pap | [2024-04-10T23:14:36.269+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository 23:16:37 policy-pap | [2024-04-10T23:14:36.713+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository 23:16:37 policy-pap | [2024-04-10T23:14:36.850+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository 23:16:37 policy-pap | [2024-04-10T23:14:37.200+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:37 policy-pap | allow.auto.create.topics = true 23:16:37 policy-pap | auto.commit.interval.ms = 5000 23:16:37 policy-pap | auto.include.jmx.reporter = true 23:16:37 policy-pap | auto.offset.reset = latest 23:16:37 policy-pap | bootstrap.servers = [kafka:9092] 23:16:37 policy-pap | check.crcs = true 23:16:37 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:37 policy-pap | client.id = consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-1 23:16:37 policy-pap | client.rack = 23:16:37 policy-pap | connections.max.idle.ms = 540000 23:16:37 policy-pap | default.api.timeout.ms = 60000 23:16:37 policy-pap | enable.auto.commit = true 23:16:37 policy-pap | exclude.internal.topics = true 23:16:37 policy-pap | fetch.max.bytes = 52428800 23:16:37 policy-pap | fetch.max.wait.ms = 500 23:16:37 policy-pap | fetch.min.bytes = 1 23:16:37 policy-pap | group.id = 9f4a6b38-834c-48e5-bf2a-977246f9eaf0 23:16:37 policy-pap | group.instance.id = null 23:16:37 policy-pap | heartbeat.interval.ms = 3000 23:16:37 policy-pap | interceptor.classes = [] 23:16:37 policy-pap | internal.leave.group.on.close = true 23:16:37 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:37 policy-pap | isolation.level = read_uncommitted 23:16:37 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:37 policy-pap | max.partition.fetch.bytes = 1048576 23:16:37 policy-pap | max.poll.interval.ms = 300000 23:16:37 policy-pap | max.poll.records = 500 23:16:37 policy-pap | metadata.max.age.ms = 300000 23:16:37 policy-pap | metric.reporters = [] 23:16:37 policy-pap | metrics.num.samples = 2 23:16:37 policy-pap | metrics.recording.level = INFO 23:16:37 policy-pap | metrics.sample.window.ms = 30000 23:16:37 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:37 policy-pap | receive.buffer.bytes = 65536 23:16:37 policy-pap | reconnect.backoff.max.ms = 1000 23:16:37 policy-pap | reconnect.backoff.ms = 50 23:16:37 policy-pap | request.timeout.ms = 30000 23:16:37 policy-pap | retry.backoff.ms = 100 23:16:37 policy-pap | sasl.client.callback.handler.class = null 23:16:37 policy-pap | sasl.jaas.config = null 23:16:37 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:37 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:37 policy-pap | sasl.kerberos.service.name = null 23:16:37 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:37 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:37 policy-pap | sasl.login.callback.handler.class = null 23:16:37 policy-pap | sasl.login.class = null 23:16:37 policy-pap | sasl.login.connect.timeout.ms = null 23:16:37 policy-pap | sasl.login.read.timeout.ms = null 23:16:37 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:37 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:37 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:37 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:37 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:37 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.146703482Z level=info msg="Migration successfully executed" 
id="Add revoked_at to the user auth token" duration=5.477516ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.152855485Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.154267011Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.413766ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.159363878Z level=info msg="Executing migration" id="create cache_data table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.160339512Z level=info msg="Migration successfully executed" id="create cache_data table" duration=974.804µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.165582683Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.167141441Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.556438ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.171820488Z level=info msg="Executing migration" id="create short_url table v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.173413778Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.58962ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.17750712Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.178603897Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.096287ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.184304919Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.184571026Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=265.667µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.188539815Z level=info msg="Executing migration" id="delete alert_definition table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.188817982Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=277.856µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.194740889Z level=info msg="Executing migration" id="recreate alert_definition table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.196233187Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.491487ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.219801333Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.222095611Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=2.294048ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.227898695Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.229307691Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.409336ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.233625498Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 23:16:37 grafana 
| logger=migrator t=2024-04-10T23:14:07.233770581Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=144.463µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.237242518Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.238490649Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.247831ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.24373471Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.244781846Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.046616ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.250758645Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.251894533Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.136369ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.25620358Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.258043326Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.839206ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.262900798Z level=info msg="Executing migration" id="Add column paused in alert_definition" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.269575704Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=6.674126ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.273567013Z level=info msg="Executing migration" id="drop alert_definition table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.274559468Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=991.765µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.281611283Z level=info msg="Executing migration" id="delete alert_definition_version table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.281840169Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=239.156µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.286139696Z level=info msg="Executing migration" id="recreate alert_definition_version table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.287424068Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.280502ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.292869384Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.294561546Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.691382ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.301392366Z level=info msg="Executing migration" id="add index in alert_definition_version table 
on alert_definition_uid and version columns" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.303027837Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.069497ms 23:16:37 policy-apex-pdp | connections.max.idle.ms = 540000 23:16:37 policy-apex-pdp | default.api.timeout.ms = 60000 23:16:37 policy-apex-pdp | enable.auto.commit = true 23:16:37 policy-apex-pdp | exclude.internal.topics = true 23:16:37 policy-apex-pdp | fetch.max.bytes = 52428800 23:16:37 policy-apex-pdp | fetch.max.wait.ms = 500 23:16:37 policy-apex-pdp | fetch.min.bytes = 1 23:16:37 policy-apex-pdp | group.id = 8c9f1915-d141-4575-8b29-0255c152ac0a 23:16:37 policy-apex-pdp | group.instance.id = null 23:16:37 policy-apex-pdp | heartbeat.interval.ms = 3000 23:16:37 policy-apex-pdp | interceptor.classes = [] 23:16:37 policy-apex-pdp | internal.leave.group.on.close = true 23:16:37 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:37 policy-apex-pdp | isolation.level = read_uncommitted 23:16:37 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:37 policy-apex-pdp | max.partition.fetch.bytes = 1048576 23:16:37 policy-apex-pdp | max.poll.interval.ms = 300000 23:16:37 policy-apex-pdp | max.poll.records = 500 23:16:37 policy-apex-pdp | metadata.max.age.ms = 300000 23:16:37 policy-apex-pdp | metric.reporters = [] 23:16:37 policy-apex-pdp | metrics.num.samples = 2 23:16:37 policy-apex-pdp | metrics.recording.level = INFO 23:16:37 policy-apex-pdp | metrics.sample.window.ms = 30000 23:16:37 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:37 policy-apex-pdp | receive.buffer.bytes = 65536 23:16:37 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:16:37 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:37 policy-apex-pdp | request.timeout.ms = 30000 23:16:37 policy-apex-pdp | retry.backoff.ms = 100 23:16:37 policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:37 policy-apex-pdp | sasl.jaas.config = null 23:16:37 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:37 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:37 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:37 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:37 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:37 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:37 policy-apex-pdp | sasl.login.class = null 23:16:37 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:37 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:37 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:37 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:37 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:37 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:37 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:37 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:37 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:37 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:37 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:37 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:37 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 
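[Editor's note] The policy-apex-pdp ConsumerConfig values dumped above (and continuing below) are the standard Kafka client consumer properties this component uses to read PDP messages from the policy-pdp-pap topic on kafka:9092. Purely as a rough illustration, the sketch below shows a bare Kafka Java consumer configured with the same key values; it is not the ONAP code path (the later log lines show the consumer wrapped in SingleThreadedKafkaTopicSource), and every property value is simply copied from the dump, with nothing added beyond placeholder plumbing.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

// Hypothetical illustration only: a plain Kafka consumer using the key settings
// from the ConsumerConfig dump above. The real policy-apex-pdp wraps this in
// SingleThreadedKafkaTopicSource rather than calling the client directly.
public class PdpPapConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka:9092");                   // from the dump
        props.put("group.id", "8c9f1915-d141-4575-8b29-0255c152ac0a");  // from the dump
        props.put("auto.offset.reset", "latest");
        props.put("enable.auto.commit", "true");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));               // topic seen in the log
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());                      // PDP_STATUS / PDP_UPDATE JSON
            }
        }
    }
}
```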
23:16:37 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:37 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:37 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:37 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:37 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:37 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:16:37 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:37 policy-apex-pdp | security.providers = null 23:16:37 policy-apex-pdp | send.buffer.bytes = 131072 23:16:37 policy-apex-pdp | session.timeout.ms = 45000 23:16:37 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:37 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:37 policy-apex-pdp | ssl.cipher.suites = null 23:16:37 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:37 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:37 policy-apex-pdp | ssl.engine.factory.class = null 23:16:37 policy-apex-pdp | ssl.key.password = null 23:16:37 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:16:37 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:16:37 policy-apex-pdp | ssl.keystore.key = null 23:16:37 policy-apex-pdp | ssl.keystore.location = null 23:16:37 policy-apex-pdp | ssl.keystore.password = null 23:16:37 policy-apex-pdp | ssl.keystore.type = JKS 23:16:37 policy-apex-pdp | ssl.protocol = TLSv1.3 23:16:37 policy-apex-pdp | ssl.provider = null 23:16:37 policy-apex-pdp | ssl.secure.random.implementation = null 23:16:37 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:16:37 policy-apex-pdp | ssl.truststore.certificates = null 23:16:37 policy-apex-pdp | ssl.truststore.location = null 23:16:37 policy-apex-pdp | ssl.truststore.password = null 23:16:37 policy-apex-pdp | ssl.truststore.type = JKS 23:16:37 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:37 policy-apex-pdp | 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.458+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.458+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.458+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1712790881458 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.458+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-8c9f1915-d141-4575-8b29-0255c152ac0a-2, groupId=8c9f1915-d141-4575-8b29-0255c152ac0a] Subscribed to topic(s): policy-pdp-pap 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.460+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=8efa43fc-60a3-4a91-96f7-185938e69330, alive=false, publisher=null]]: starting 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.475+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:16:37 policy-apex-pdp | acks = -1 23:16:37 policy-apex-pdp | auto.include.jmx.reporter = true 23:16:37 policy-apex-pdp | batch.size = 16384 23:16:37 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:16:37 policy-apex-pdp | buffer.memory = 33554432 23:16:37 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:16:37 policy-apex-pdp | client.id = producer-1 23:16:37 policy-apex-pdp | compression.type = none 23:16:37 policy-apex-pdp | connections.max.idle.ms = 540000 23:16:37 policy-apex-pdp | delivery.timeout.ms = 120000 23:16:37 policy-apex-pdp | 
enable.idempotence = true 23:16:37 policy-apex-pdp | interceptor.classes = [] 23:16:37 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:37 policy-apex-pdp | linger.ms = 0 23:16:37 policy-apex-pdp | max.block.ms = 60000 23:16:37 policy-apex-pdp | max.in.flight.requests.per.connection = 5 23:16:37 policy-apex-pdp | max.request.size = 1048576 23:16:37 policy-apex-pdp | metadata.max.age.ms = 300000 23:16:37 policy-apex-pdp | metadata.max.idle.ms = 300000 23:16:37 policy-apex-pdp | metric.reporters = [] 23:16:37 policy-apex-pdp | metrics.num.samples = 2 23:16:37 policy-apex-pdp | metrics.recording.level = INFO 23:16:37 policy-apex-pdp | metrics.sample.window.ms = 30000 23:16:37 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true 23:16:37 policy-apex-pdp | partitioner.availability.timeout.ms = 0 23:16:37 policy-apex-pdp | partitioner.class = null 23:16:37 policy-apex-pdp | partitioner.ignore.keys = false 23:16:37 policy-apex-pdp | receive.buffer.bytes = 32768 23:16:37 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:16:37 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:37 policy-apex-pdp | request.timeout.ms = 30000 23:16:37 policy-apex-pdp | retries = 2147483647 23:16:37 policy-apex-pdp | retry.backoff.ms = 100 23:16:37 policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:37 policy-apex-pdp | sasl.jaas.config = null 23:16:37 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:37 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:37 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:37 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:37 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:37 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:37 policy-apex-pdp | sasl.login.class = null 23:16:37 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:37 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:37 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:37 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:37 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:37 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:37 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:37 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:37 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:37 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:37 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:37 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:37 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:37 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:37 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:37 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:37 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:37 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:37 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:16:37 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:37 policy-apex-pdp | security.providers = null 23:16:37 policy-apex-pdp | send.buffer.bytes = 131072 23:16:37 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:37 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:37 policy-apex-pdp | ssl.cipher.suites = null 23:16:37 
policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:37 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:37 policy-apex-pdp | ssl.engine.factory.class = null 23:16:37 policy-apex-pdp | ssl.key.password = null 23:16:37 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:16:37 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:16:37 policy-apex-pdp | ssl.keystore.key = null 23:16:37 policy-apex-pdp | ssl.keystore.location = null 23:16:37 policy-apex-pdp | ssl.keystore.password = null 23:16:37 policy-apex-pdp | ssl.keystore.type = JKS 23:16:37 policy-apex-pdp | ssl.protocol = TLSv1.3 23:16:37 policy-apex-pdp | ssl.provider = null 23:16:37 policy-apex-pdp | ssl.secure.random.implementation = null 23:16:37 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:16:37 policy-apex-pdp | ssl.truststore.certificates = null 23:16:37 policy-apex-pdp | ssl.truststore.location = null 23:16:37 policy-apex-pdp | ssl.truststore.password = null 23:16:37 policy-apex-pdp | ssl.truststore.type = JKS 23:16:37 policy-apex-pdp | transaction.timeout.ms = 60000 23:16:37 policy-apex-pdp | transactional.id = null 23:16:37 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:37 policy-apex-pdp | 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.486+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.505+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.505+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.505+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1712790881505 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.506+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=8efa43fc-60a3-4a91-96f7-185938e69330, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.506+00:00|INFO|ServiceManager|main] service manager starting set alive 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.506+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.509+00:00|INFO|ServiceManager|main] service manager starting topic sinks 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.509+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.512+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.512+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.512+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.512+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=8c9f1915-d141-4575-8b29-0255c152ac0a, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, 
allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@e077866 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.512+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=8c9f1915-d141-4575-8b29-0255c152ac0a, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.513+00:00|INFO|ServiceManager|main] service manager starting Create REST server 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.558+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: 23:16:37 policy-apex-pdp | [] 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.560+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 23:16:37 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"27a8a30a-eb9f-446e-a8f9-e99c6b0786cb","timestampMs":1712790881516,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup"} 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.812+00:00|INFO|ServiceManager|main] service manager starting Rest Server 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.812+00:00|INFO|ServiceManager|main] service manager starting 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.812+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.812+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@63f34b70{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@641856{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.827+00:00|INFO|ServiceManager|main] service manager started 23:16:37 policy-apex-pdp | 
[2024-04-10T23:14:41.827+00:00|INFO|ServiceManager|main] service manager started 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.827+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 23:16:37 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json 23:16:37 simulator | overriding logback.xml 23:16:37 simulator | 2024-04-10 23:14:03,472 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json 23:16:37 simulator | 2024-04-10 23:14:03,550 INFO org.onap.policy.models.simulators starting 23:16:37 simulator | 2024-04-10 23:14:03,551 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties 23:16:37 simulator | 2024-04-10 23:14:03,760 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION 23:16:37 simulator | 2024-04-10 23:14:03,762 INFO org.onap.policy.models.simulators starting A&AI simulator 23:16:37 simulator | 2024-04-10 23:14:03,947 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2a2c13a8{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b6b1987{/,null,STOPPED}, connector=A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:37 simulator | 2024-04-10 23:14:03,960 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2a2c13a8{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b6b1987{/,null,STOPPED}, connector=A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:37 simulator | 2024-04-10 23:14:03,964 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2a2c13a8{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b6b1987{/,null,STOPPED}, connector=A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:37 simulator | 2024-04-10 23:14:03,968 
INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:16:37 simulator | 2024-04-10 23:14:04,057 INFO Session workerName=node0 23:16:37 simulator | 2024-04-10 23:14:04,705 INFO Using GSON for REST calls 23:16:37 simulator | 2024-04-10 23:14:04,784 INFO Started o.e.j.s.ServletContextHandler@b6b1987{/,null,AVAILABLE} 23:16:37 simulator | 2024-04-10 23:14:04,792 INFO Started A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} 23:16:37 simulator | 2024-04-10 23:14:04,801 INFO Started Server@2a2c13a8{STARTING}[11.0.20,sto=0] @1900ms 23:16:37 simulator | 2024-04-10 23:14:04,802 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@2a2c13a8{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@b6b1987{/,null,AVAILABLE}, connector=A&AI simulator@7d42c224{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-33aeca0b==org.glassfish.jersey.servlet.ServletContainer@bff81822{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4161 ms. 23:16:37 simulator | 2024-04-10 23:14:04,806 INFO org.onap.policy.models.simulators starting SDNC simulator 23:16:37 simulator | 2024-04-10 23:14:04,809 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@62452cc9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6941827a{/,null,STOPPED}, connector=SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:37 simulator | 2024-04-10 23:14:04,809 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@62452cc9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6941827a{/,null,STOPPED}, connector=SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:37 simulator | 2024-04-10 23:14:04,810 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, 
toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@62452cc9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6941827a{/,null,STOPPED}, connector=SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:37 simulator | 2024-04-10 23:14:04,811 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:16:37 simulator | 2024-04-10 23:14:04,821 INFO Session workerName=node0 23:16:37 simulator | 2024-04-10 23:14:04,902 INFO Using GSON for REST calls 23:16:37 simulator | 2024-04-10 23:14:04,913 INFO Started o.e.j.s.ServletContextHandler@6941827a{/,null,AVAILABLE} 23:16:37 simulator | 2024-04-10 23:14:04,922 INFO Started SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} 23:16:37 policy-db-migrator | > upgrade 0820-toscatrigger.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 
0870-FK_ToscaServiceTemplate_dataTypesName.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 
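[Editor's note] The upgrade scripts above (0820 through 0960) are plain DDL that policy-db-migrator applies in sequence to build the TOSCA indexes and foreign keys. As a hypothetical sketch only, the snippet below shows how one of these dumped statements could be applied over JDBC; the connection URL and credentials are placeholders and do not come from the log, and the real policy-db-migrator drives these .sql files itself rather than using code like this.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

// Hypothetical sketch: applying the 0960 FK upgrade statement from the log via JDBC.
// URL, user, and password are placeholders, not values taken from the build output.
public class FkUpgradeSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mariadb://mariadb:3306/policyadmin"; // placeholder connection details
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             Statement stmt = conn.createStatement()) {
            // Statement text copied verbatim from the 0960 upgrade script shown above.
            stmt.execute("ALTER TABLE toscanodetemplate "
                    + "ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName "
                    + "FOREIGN KEY (capabilitiesName, capabilitiesVersion) "
                    + "REFERENCES toscacapabilityassignments (name, version) "
                    + "ON UPDATE RESTRICT ON DELETE RESTRICT");
        }
    }
}
```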
policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 simulator | 2024-04-10 23:14:04,922 INFO Started Server@62452cc9{STARTING}[11.0.20,sto=0] @2020ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.306693278Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.306904863Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=211.425µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.310448641Z level=info msg="Executing migration" id="drop alert_definition_version table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.311941909Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.490118ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.317378525Z level=info msg="Executing migration" id="create alert_instance table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.318480402Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.101108ms 23:16:37 policy-pap | sasl.mechanism = GSSAPI 23:16:37 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:37 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:37 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:37 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:37 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:37 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:37 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:37 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:37 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:37 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:37 policy-pap | security.protocol = PLAINTEXT 23:16:37 policy-pap | security.providers = null 23:16:37 policy-pap | send.buffer.bytes = 131072 23:16:37 policy-pap | session.timeout.ms = 45000 23:16:37 
policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:37 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:37 policy-pap | ssl.cipher.suites = null 23:16:37 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:37 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:37 policy-pap | ssl.engine.factory.class = null 23:16:37 policy-pap | ssl.key.password = null 23:16:37 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:37 policy-pap | ssl.keystore.certificate.chain = null 23:16:37 policy-pap | ssl.keystore.key = null 23:16:37 policy-pap | ssl.keystore.location = null 23:16:37 policy-pap | ssl.keystore.password = null 23:16:37 policy-pap | ssl.keystore.type = JKS 23:16:37 policy-pap | ssl.protocol = TLSv1.3 23:16:37 policy-pap | ssl.provider = null 23:16:37 policy-pap | ssl.secure.random.implementation = null 23:16:37 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:37 policy-pap | ssl.truststore.certificates = null 23:16:37 policy-pap | ssl.truststore.location = null 23:16:37 policy-pap | ssl.truststore.password = null 23:16:37 policy-pap | ssl.truststore.type = JKS 23:16:37 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:37 policy-pap | 23:16:37 policy-pap | [2024-04-10T23:14:37.424+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:37 policy-pap | [2024-04-10T23:14:37.424+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:37 policy-pap | [2024-04-10T23:14:37.424+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1712790877421 23:16:37 policy-pap | [2024-04-10T23:14:37.428+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-1, groupId=9f4a6b38-834c-48e5-bf2a-977246f9eaf0] Subscribed to topic(s): policy-pdp-pap 23:16:37 policy-pap | [2024-04-10T23:14:37.429+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:37 policy-pap | allow.auto.create.topics = true 23:16:37 policy-pap | auto.commit.interval.ms = 5000 23:16:37 policy-pap | auto.include.jmx.reporter = true 23:16:37 policy-pap | auto.offset.reset = latest 23:16:37 policy-pap | bootstrap.servers = [kafka:9092] 23:16:37 policy-pap | check.crcs = true 23:16:37 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.322319028Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.323959728Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.64016ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.329289851Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.330567583Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.279542ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.33647117Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.342383878Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=5.912307ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.346155041Z level=info msg="Executing migration" id="remove 
index def_org_id, def_uid, current_state on alert_instance" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.347103385Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=949.344µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.351402682Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.352349225Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=946.783µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.355975606Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.386304982Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=30.322996ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.39146917Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.417499759Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=26.034729ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.422940335Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.423670713Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=730.168µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.42838564Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.430038731Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.652861ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.436380169Z level=info msg="Executing migration" id="add current_reason column related to current_state" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.442116262Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.736463ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.445932307Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.451873435Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.941238ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.458528051Z level=info msg="Executing migration" id="create alert_rule table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.460392517Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.868927ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.465945526Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.466991871Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.046215ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.472330825Z level=info msg="Executing migration" id="add index in 
alert_rule on org_id and uid columns" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.473384731Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.052716ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.477864782Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.479528544Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.662862ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.486806125Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.486879877Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=74.932µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.490536649Z level=info msg="Executing migration" id="add column for to alert_rule" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.496588929Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.05225ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.501717607Z level=info msg="Executing migration" id="add column annotations to alert_rule" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.507521682Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=5.803945ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.511007108Z level=info msg="Executing migration" id="add column labels to alert_rule" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.516796813Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=5.789024ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.521747956Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.522632478Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=883.902µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.526151315Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 23:16:37 kafka | [2024-04-10 23:14:05,819] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) 23:16:37 kafka | [2024-04-10 23:14:05,824] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) 23:16:37 kafka | [2024-04-10 23:14:05,827] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:37 kafka | [2024-04-10 23:14:05,840] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) 23:16:37 kafka | [2024-04-10 23:14:05,846] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) 23:16:37 kafka | [2024-04-10 23:14:05,851] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) 23:16:37 kafka | [2024-04-10 23:14:05,857] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) 
23:16:37 kafka | [2024-04-10 23:14:05,867] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) 23:16:37 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.527576231Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.424236ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.533880448Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.539795825Z level=info msg="Migration 
successfully executed" id="add dashboard_uid column to alert_rule" duration=5.911777ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.544287278Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.550115693Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=5.827945ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.571147427Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 23:16:37 simulator | 2024-04-10 23:14:04,923 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@62452cc9{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6941827a{/,null,AVAILABLE}, connector=SDNC simulator@3e10dc6{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-45e37a7e==org.glassfish.jersey.servlet.ServletContainer@95a48755{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4887 ms. 23:16:37 simulator | 2024-04-10 23:14:04,924 INFO org.onap.policy.models.simulators starting SO simulator 23:16:37 simulator | 2024-04-10 23:14:04,931 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@488eb7f2{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@5e81e5ac{/,null,STOPPED}, connector=SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:37 simulator | 2024-04-10 23:14:04,931 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@488eb7f2{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@5e81e5ac{/,null,STOPPED}, connector=SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:37 simulator | 2024-04-10 23:14:04,932 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, 
host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@488eb7f2{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@5e81e5ac{/,null,STOPPED}, connector=SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:37 simulator | 2024-04-10 23:14:04,935 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:16:37 simulator | 2024-04-10 23:14:04,942 INFO Session workerName=node0 23:16:37 simulator | 2024-04-10 23:14:05,037 INFO Using GSON for REST calls 23:16:37 simulator | 2024-04-10 23:14:05,052 INFO Started o.e.j.s.ServletContextHandler@5e81e5ac{/,null,AVAILABLE} 23:16:37 simulator | 2024-04-10 23:14:05,053 INFO Started SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} 23:16:37 simulator | 2024-04-10 23:14:05,054 INFO Started Server@488eb7f2{STARTING}[11.0.20,sto=0] @2152ms 23:16:37 simulator | 2024-04-10 23:14:05,054 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@488eb7f2{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@5e81e5ac{/,null,AVAILABLE}, connector=SO simulator@5bc9ba1d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7516e4e5==org.glassfish.jersey.servlet.ServletContainer@74ca99b0{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4878 ms. 
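The three simulators above bind plain HTTP listeners on 6668 (SDNC), 6669 (SO) and, just below, 6670 (VFC). A minimal reachability probe for those ports, a sketch only, assuming Python with nothing beyond the standard library and that the ports happen to be reachable as localhost from wherever the probe runs (inside the CSIT network the simulator container name would be used instead), might look like:

    # Hedged sketch: TCP reachability probe for the simulator ports logged above.
    # Host "localhost" is an assumption, not taken from the job output.
    import socket

    for name, port in [("SDNC", 6668), ("SO", 6669), ("VFC", 6670)]:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(2)
            result = s.connect_ex(("localhost", port))
            print(f"{name} simulator port {port}: {'open' if result == 0 else 'closed'}")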
23:16:37 simulator | 2024-04-10 23:14:05,055 INFO org.onap.policy.models.simulators starting VFC simulator 23:16:37 simulator | 2024-04-10 23:14:05,063 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6035b93b{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@320de594{/,null,STOPPED}, connector=VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:37 simulator | 2024-04-10 23:14:05,063 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6035b93b{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@320de594{/,null,STOPPED}, connector=VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:37 simulator | 2024-04-10 23:14:05,065 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6035b93b{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@320de594{/,null,STOPPED}, connector=VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:37 simulator | 2024-04-10 23:14:05,066 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:16:37 simulator | 2024-04-10 23:14:05,069 INFO Session workerName=node0 23:16:37 simulator | 2024-04-10 23:14:05,123 INFO Using GSON for REST calls 23:16:37 simulator | 2024-04-10 23:14:05,133 INFO Started o.e.j.s.ServletContextHandler@320de594{/,null,AVAILABLE} 23:16:37 simulator | 2024-04-10 23:14:05,135 INFO Started VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} 23:16:37 simulator | 2024-04-10 23:14:05,135 INFO Started Server@6035b93b{STARTING}[11.0.20,sto=0] @2233ms 23:16:37 simulator | 2024-04-10 23:14:05,135 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, 
swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6035b93b{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@320de594{/,null,AVAILABLE}, connector=VFC simulator@3fa2213{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-6f0b0a5e==org.glassfish.jersey.servlet.ServletContainer@2d9a8171{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4930 ms. 23:16:37 simulator | 2024-04-10 23:14:05,136 INFO org.onap.policy.models.simulators started 23:16:37 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0100-pdp.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator 
| > upgrade 0150-pdpstatistics.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a 23:16:37 policy-db-migrator | JOIN pdpstatistics b 23:16:37 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp 23:16:37 policy-db-migrator | SET a.id = b.id 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.573402543Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=2.326587ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.5804876Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.586997181Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=6.514422ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.592211931Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.598078858Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.866916ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.601502873Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.601551994Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=49.391µs 23:16:37 grafana | logger=migrator 
t=2024-04-10T23:14:07.605077282Z level=info msg="Executing migration" id="create alert_rule_version table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.605909693Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=831.861µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.612281241Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.61464615Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=2.365219ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.620856775Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.621968623Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.111338ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.627430169Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.627595093Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=164.354µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.631555162Z level=info msg="Executing migration" id="add column for to alert_rule_version" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.639343616Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=7.788964ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.642997097Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.651611261Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=8.611384ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.658548514Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.663819166Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=5.269262ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.668789379Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.675087697Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.296808ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.678815749Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.685391043Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.573264ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.691406302Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.691451373Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version 
table" duration=45.161µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.697934096Z level=info msg="Executing migration" id=create_alert_configuration_table 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.699242618Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.310903ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.704679483Z level=info msg="Executing migration" id="Add column default in alert_configuration" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.711088163Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.40825ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.714673053Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.714723884Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=51.391µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.719720929Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.72983944Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=10.117781ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.73704389Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.738161587Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.115447ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.742761032Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.752189157Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=9.428635ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.757169521Z level=info msg="Executing migration" id=create_ngalert_configuration_table 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.758155895Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=979.674µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.762353071Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.763842837Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.489187ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.771102398Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.780580894Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=9.479426ms 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0210-sequence.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:16:37 policy-db-migrator | -------------- 23:16:37 
policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0220-sequence.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0120-toscatrigger.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0140-toscaparameter.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0150-toscaproperty.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.785291152Z level=info msg="Executing migration" id="create provenance_type table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.786124342Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=832.94µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.791349213Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.828+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, 
/*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@63f34b70{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@641856{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.950+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: mAFlxob1QoSnxAKM2SbgkA 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.950+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8c9f1915-d141-4575-8b29-0255c152ac0a-2, groupId=8c9f1915-d141-4575-8b29-0255c152ac0a] Cluster ID: mAFlxob1QoSnxAKM2SbgkA 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.952+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8c9f1915-d141-4575-8b29-0255c152ac0a-2, groupId=8c9f1915-d141-4575-8b29-0255c152ac0a] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.953+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.960+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8c9f1915-d141-4575-8b29-0255c152ac0a-2, groupId=8c9f1915-d141-4575-8b29-0255c152ac0a] (Re-)joining group 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.980+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8c9f1915-d141-4575-8b29-0255c152ac0a-2, groupId=8c9f1915-d141-4575-8b29-0255c152ac0a] Request joining group due to: need to re-join with the given member-id: consumer-8c9f1915-d141-4575-8b29-0255c152ac0a-2-dee66f2c-2405-421f-9cf7-51cebf354ae9 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.981+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8c9f1915-d141-4575-8b29-0255c152ac0a-2, groupId=8c9f1915-d141-4575-8b29-0255c152ac0a] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 23:16:37 policy-apex-pdp | [2024-04-10T23:14:41.981+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8c9f1915-d141-4575-8b29-0255c152ac0a-2, groupId=8c9f1915-d141-4575-8b29-0255c152ac0a] (Re-)joining group 23:16:37 policy-apex-pdp | [2024-04-10T23:14:42.518+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls 23:16:37 policy-apex-pdp | [2024-04-10T23:14:42.518+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls 23:16:37 policy-apex-pdp | [2024-04-10T23:14:44.988+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8c9f1915-d141-4575-8b29-0255c152ac0a-2, groupId=8c9f1915-d141-4575-8b29-0255c152ac0a] Successfully joined group with generation Generation{generationId=1, memberId='consumer-8c9f1915-d141-4575-8b29-0255c152ac0a-2-dee66f2c-2405-421f-9cf7-51cebf354ae9', protocol='range'} 23:16:37 policy-apex-pdp | [2024-04-10T23:14:44.998+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8c9f1915-d141-4575-8b29-0255c152ac0a-2, groupId=8c9f1915-d141-4575-8b29-0255c152ac0a] Finished assignment for group at generation 1: {consumer-8c9f1915-d141-4575-8b29-0255c152ac0a-2-dee66f2c-2405-421f-9cf7-51cebf354ae9=Assignment(partitions=[policy-pdp-pap-0])} 23:16:37 policy-apex-pdp | [2024-04-10T23:14:45.008+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8c9f1915-d141-4575-8b29-0255c152ac0a-2, groupId=8c9f1915-d141-4575-8b29-0255c152ac0a] Successfully synced group in generation Generation{generationId=1, memberId='consumer-8c9f1915-d141-4575-8b29-0255c152ac0a-2-dee66f2c-2405-421f-9cf7-51cebf354ae9', protocol='range'} 23:16:37 policy-apex-pdp | [2024-04-10T23:14:45.008+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8c9f1915-d141-4575-8b29-0255c152ac0a-2, groupId=8c9f1915-d141-4575-8b29-0255c152ac0a] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:37 policy-apex-pdp | [2024-04-10T23:14:45.010+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8c9f1915-d141-4575-8b29-0255c152ac0a-2, groupId=8c9f1915-d141-4575-8b29-0255c152ac0a] Adding newly assigned partitions: policy-pdp-pap-0 23:16:37 policy-apex-pdp | [2024-04-10T23:14:45.018+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8c9f1915-d141-4575-8b29-0255c152ac0a-2, groupId=8c9f1915-d141-4575-8b29-0255c152ac0a] Found no committed offset for partition policy-pdp-pap-0 23:16:37 policy-apex-pdp | [2024-04-10T23:14:45.031+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-8c9f1915-d141-4575-8b29-0255c152ac0a-2, groupId=8c9f1915-d141-4575-8b29-0255c152ac0a] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
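At this point the apex-pdp consumer (group 8c9f1915-d141-4575-8b29-0255c152ac0a) has joined its group, been assigned policy-pdp-pap-0 and reset its offset, and the PDP_STATUS / PDP_UPDATE exchanges that follow all flow over that topic. A minimal external tail of the same topic, a sketch only, assuming the kafka-python client is installed and kafka:9092 is reachable (the group id below is hypothetical and is not one used by the job), might look like:

    # Hedged sketch: tail the policy-pdp-pap topic and print the PDP message
    # headers seen in the exchanges logged above. kafka-python is an assumption;
    # the CSIT components themselves use the Java client.
    import json
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(
        "policy-pdp-pap",
        bootstrap_servers="kafka:9092",
        group_id="csit-log-observer",        # hypothetical group id
        auto_offset_reset="latest",
        value_deserializer=lambda v: v.decode("utf-8"),
    )

    for record in consumer:
        msg = json.loads(record.value)
        # messageName is PDP_STATUS, PDP_UPDATE or PDP_STATE_CHANGE in the traffic above
        print(record.partition, record.offset, msg.get("messageName"), msg.get("requestId"))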
23:16:37 policy-apex-pdp | [2024-04-10T23:14:56.182+00:00|INFO|RequestLog|qtp1068445309-29] 172.17.0.3 - policyadmin [10/Apr/2024:23:14:56 +0000] "GET /metrics HTTP/1.1" 200 10651 "-" "Prometheus/2.51.1" 23:16:37 policy-apex-pdp | [2024-04-10T23:15:01.513+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 23:16:37 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"6d8dce48-0b3f-4528-9899-b13742555876","timestampMs":1712790901512,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup"} 23:16:37 policy-apex-pdp | [2024-04-10T23:15:01.544+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:37 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"6d8dce48-0b3f-4528-9899-b13742555876","timestampMs":1712790901512,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup"} 23:16:37 policy-apex-pdp | [2024-04-10T23:15:01.547+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:37 policy-apex-pdp | [2024-04-10T23:15:01.705+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:37 policy-apex-pdp | {"source":"pap-f0ecc202-f082-45c7-b7f8-f2f10d3ef31a","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"43b7d102-6385-499f-be91-353062d39071","timestampMs":1712790901635,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:37 policy-apex-pdp | [2024-04-10T23:15:01.718+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] 23:16:37 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"32d81460-be72-404b-b9df-8a5e51bd36ef","timestampMs":1712790901718,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup"} 23:16:37 policy-apex-pdp | [2024-04-10T23:15:01.718+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher 23:16:37 policy-apex-pdp | [2024-04-10T23:15:01.720+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:37 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"43b7d102-6385-499f-be91-353062d39071","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"a7abe712-a48f-4ca3-9540-31399f5f2837","timestampMs":1712790901720,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:37 policy-apex-pdp | [2024-04-10T23:15:01.734+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:37 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"32d81460-be72-404b-b9df-8a5e51bd36ef","timestampMs":1712790901718,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup"} 23:16:37 policy-apex-pdp | [2024-04-10T23:15:01.734+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:37 policy-apex-pdp | [2024-04-10T23:15:01.740+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:37 
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"43b7d102-6385-499f-be91-353062d39071","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"a7abe712-a48f-4ca3-9540-31399f5f2837","timestampMs":1712790901720,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:37 policy-apex-pdp | [2024-04-10T23:15:01.743+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:37 kafka | [2024-04-10 23:14:05,870] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) 23:16:37 kafka | [2024-04-10 23:14:05,872] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) 23:16:37 kafka | [2024-04-10 23:14:05,873] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) 23:16:37 kafka | [2024-04-10 23:14:05,873] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) 23:16:37 kafka | [2024-04-10 23:14:05,874] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) 23:16:37 kafka | [2024-04-10 23:14:05,874] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) 23:16:37 kafka | [2024-04-10 23:14:05,877] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) 23:16:37 kafka | [2024-04-10 23:14:05,878] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) 23:16:37 kafka | [2024-04-10 23:14:05,879] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) 23:16:37 kafka | [2024-04-10 23:14:05,879] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) 23:16:37 kafka | [2024-04-10 23:14:05,880] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) 23:16:37 kafka | [2024-04-10 23:14:05,881] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) 23:16:37 kafka | [2024-04-10 23:14:05,886] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:05,888] INFO Kafka version: 7.6.0-ccs (org.apache.kafka.common.utils.AppInfoParser) 23:16:37 kafka | [2024-04-10 23:14:05,888] INFO Kafka commitId: 1991cb733c81d6791626f88253a042b2ec835ab8 (org.apache.kafka.common.utils.AppInfoParser) 23:16:37 kafka | [2024-04-10 23:14:05,888] INFO Kafka startTimeMs: 1712790845881 (org.apache.kafka.common.utils.AppInfoParser) 23:16:37 kafka | [2024-04-10 23:14:05,890] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) 23:16:37 kafka | [2024-04-10 23:14:05,894] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) 23:16:37 kafka | [2024-04-10 23:14:05,895] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) 23:16:37 kafka | [2024-04-10 23:14:05,899] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) 23:16:37 kafka | [2024-04-10 23:14:05,901] 
DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) 23:16:37 kafka | [2024-04-10 23:14:05,904] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) 23:16:37 kafka | [2024-04-10 23:14:05,904] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) 23:16:37 kafka | [2024-04-10 23:14:05,908] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) 23:16:37 kafka | [2024-04-10 23:14:05,909] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) 23:16:37 kafka | [2024-04-10 23:14:05,912] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 23:16:37 kafka | [2024-04-10 23:14:05,927] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) 23:16:37 kafka | [2024-04-10 23:14:05,928] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) 23:16:37 kafka | [2024-04-10 23:14:05,928] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) 23:16:37 kafka | [2024-04-10 23:14:05,929] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) 23:16:37 kafka | [2024-04-10 23:14:05,930] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) 23:16:37 kafka | [2024-04-10 23:14:05,960] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) 23:16:37 kafka | [2024-04-10 23:14:06,027] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:06,058] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 23:16:37 kafka | [2024-04-10 23:14:06,085] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 23:16:37 kafka | [2024-04-10 23:14:10,962] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 23:16:37 kafka | [2024-04-10 23:14:10,963] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 23:16:37 kafka | [2024-04-10 23:14:39,934] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 23:16:37 kafka | [2024-04-10 23:14:39,948] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) 23:16:37 kafka | [2024-04-10 23:14:39,946] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, 
segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 23:16:37 kafka | [2024-04-10 23:14:39,954] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) 23:16:37 policy-apex-pdp | [2024-04-10T23:15:01.784+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:37 policy-apex-pdp | {"source":"pap-f0ecc202-f082-45c7-b7f8-f2f10d3ef31a","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"be191d72-cf37-4b38-8bd3-f09869418d7b","timestampMs":1712790901636,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:37 policy-apex-pdp | [2024-04-10T23:15:01.786+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:37 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"be191d72-cf37-4b38-8bd3-f09869418d7b","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"888e43f0-64dd-400e-978d-6aff9547a801","timestampMs":1712790901786,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:37 policy-apex-pdp | [2024-04-10T23:15:01.801+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:37 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"be191d72-cf37-4b38-8bd3-f09869418d7b","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"888e43f0-64dd-400e-978d-6aff9547a801","timestampMs":1712790901786,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:37 policy-apex-pdp | [2024-04-10T23:15:01.802+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:37 policy-apex-pdp | [2024-04-10T23:15:01.833+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:37 policy-apex-pdp | {"source":"pap-f0ecc202-f082-45c7-b7f8-f2f10d3ef31a","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"472c3b39-181b-4e95-9577-e8759534697c","timestampMs":1712790901807,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:37 policy-apex-pdp | [2024-04-10T23:15:01.835+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:37 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"472c3b39-181b-4e95-9577-e8759534697c","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"7f9ad840-c145-4eb1-87ea-e490e44946de","timestampMs":1712790901834,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:37 policy-apex-pdp | [2024-04-10T23:15:01.848+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:37 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"472c3b39-181b-4e95-9577-e8759534697c","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"7f9ad840-c145-4eb1-87ea-e490e44946de","timestampMs":1712790901834,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:37 policy-apex-pdp | [2024-04-10T23:15:01.848+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:37 policy-apex-pdp | [2024-04-10T23:15:56.079+00:00|INFO|RequestLog|qtp1068445309-28] 172.17.0.3 - policyadmin [10/Apr/2024:23:15:56 +0000] "GET /metrics HTTP/1.1" 200 10653 "-" "Prometheus/2.51.1" 23:16:37 kafka | [2024-04-10 23:14:40,002] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(qAknXW8rRl-lTJen2kDk1Q),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(d-olkgqFQhOA7vPVx66rWg),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, 
addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 23:16:37 kafka | [2024-04-10 23:14:40,003] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) 23:16:37 kafka | [2024-04-10 23:14:40,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,006] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 
(state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0100-upgrade.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | select 'upgrade to 1100 completed' as msg 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | msg 23:16:37 policy-db-migrator | upgrade to 1100 completed 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME 23:16:37 policy-db-migrator | -------------- 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.792576264Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.223801ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.796326627Z level=info msg="Executing migration" id="create alert_image table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.79725147Z level=info msg="Migration successfully executed" id="create alert_image table" duration=925.172µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.802555772Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.803568518Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" 
duration=1.012566ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.807548226Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.807608098Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=60.562µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.813559776Z level=info msg="Executing migration" id=create_alert_configuration_history_table 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.815900524Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=2.339868ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.821992606Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.823047562Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.056916ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.826846137Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.827280848Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.83099609Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.831460172Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=463.852µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.837332569Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.838975009Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.6392ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.843477202Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.855948582Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=12.47341ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.860585858Z level=info msg="Executing migration" id="create library_element table v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.861417408Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=831.34µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.868141106Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.869305075Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.163529ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.874779541Z level=info msg="Executing migration" id="create library_element_connection table v1" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.876229157Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.448966ms 23:16:37 grafana | logger=migrator 
t=2024-04-10T23:14:07.880797261Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.881851397Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.053806ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.893161959Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.894341169Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.1786ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.900381079Z level=info msg="Executing migration" id="increase max description length to 2048" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.900452961Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=73.712µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.918032138Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.918131201Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=100.993µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.92450372Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.925480154Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=975.584µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.929491864Z level=info msg="Executing migration" id="create data_keys table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.931851673Z level=info msg="Migration successfully executed" id="create data_keys table" duration=2.358769ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.938889699Z level=info msg="Executing migration" id="create secrets table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.940404296Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.513817ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.946435617Z level=info msg="Executing migration" id="rename data_keys name column to id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.977312176Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=30.877229ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.984844283Z level=info msg="Executing migration" id="add name column into data_keys" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.990167976Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.322573ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.993700514Z level=info msg="Executing migration" id="copy data_keys id column values into name" 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) 23:16:37 
policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0120-audit_sequence.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | TRUNCATE TABLE sequence 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | DROP TABLE pdpstatistics 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | DROP TABLE statistics_sequence 23:16:37 policy-db-migrator | -------------- 23:16:37 policy-db-migrator | 23:16:37 policy-db-migrator | policyadmin: OK: upgrade (1300) 23:16:37 policy-db-migrator | name version 23:16:37 policy-db-migrator | policyadmin 1300 23:16:37 policy-db-migrator | ID script operation from_version to_version tag success atTime 23:16:37 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:07 23:16:37 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:07 23:16:37 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:07 23:16:37 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:07 23:16:37 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:07 23:16:37 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 
1004242314070800u 1 2024-04-10 23:14:07 23:16:37 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:07 23:16:37 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:07 23:16:37 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:07 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.993877358Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=175.904µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:07.997505169Z level=info msg="Executing migration" id="rename data_keys name column to label" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.031773343Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=34.263973ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.036621684Z level=info msg="Executing migration" id="rename data_keys id column back to name" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.06618995Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=29.568826ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.069620685Z level=info msg="Executing migration" id="create kv_store table v1" 23:16:37 policy-pap | client.id = consumer-policy-pap-2 23:16:37 policy-pap | client.rack = 23:16:37 policy-pap | connections.max.idle.ms = 540000 23:16:37 policy-pap | default.api.timeout.ms = 60000 23:16:37 policy-pap | enable.auto.commit = true 23:16:37 policy-pap | exclude.internal.topics = true 23:16:37 policy-pap | fetch.max.bytes = 52428800 23:16:37 policy-pap | fetch.max.wait.ms = 500 23:16:37 policy-pap | fetch.min.bytes = 1 23:16:37 policy-pap | group.id = policy-pap 23:16:37 policy-pap | group.instance.id = null 23:16:37 policy-pap | heartbeat.interval.ms = 3000 23:16:37 policy-pap | interceptor.classes = [] 23:16:37 policy-pap | internal.leave.group.on.close = true 23:16:37 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:37 policy-pap | isolation.level = read_uncommitted 23:16:37 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:37 policy-pap | max.partition.fetch.bytes = 1048576 23:16:37 policy-pap | max.poll.interval.ms = 300000 23:16:37 policy-pap | max.poll.records = 500 23:16:37 policy-pap | metadata.max.age.ms = 300000 23:16:37 policy-pap | metric.reporters = [] 23:16:37 policy-pap | metrics.num.samples = 2 23:16:37 policy-pap | metrics.recording.level = INFO 23:16:37 policy-pap | metrics.sample.window.ms = 30000 23:16:37 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:37 policy-pap | receive.buffer.bytes = 65536 23:16:37 policy-pap | reconnect.backoff.max.ms = 1000 23:16:37 policy-pap | reconnect.backoff.ms = 50 23:16:37 policy-pap | request.timeout.ms = 30000 23:16:37 policy-pap | retry.backoff.ms = 100 23:16:37 policy-pap | sasl.client.callback.handler.class = null 23:16:37 policy-pap | sasl.jaas.config = null 23:16:37 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:37 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:37 policy-pap | sasl.kerberos.service.name = null 23:16:37 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:37 
policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:37 policy-pap | sasl.login.callback.handler.class = null 23:16:37 policy-pap | sasl.login.class = null 23:16:37 policy-pap | sasl.login.connect.timeout.ms = null 23:16:37 policy-pap | sasl.login.read.timeout.ms = null 23:16:37 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.070407375Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=786.15µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.074653401Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.075455231Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=801.52µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.08185747Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.082304341Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=446.531µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.08629303Z level=info msg="Executing migration" id="create permission table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.087343257Z level=info msg="Migration successfully executed" id="create permission table" duration=1.049227ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.091519381Z level=info msg="Executing migration" id="add unique index permission.role_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.093158002Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.637181ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.103273043Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.105025207Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.748754ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.109906129Z level=info msg="Executing migration" id="create role table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.110848182Z level=info msg="Migration successfully executed" id="create role table" duration=939.273µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.114795851Z level=info msg="Executing migration" id="add column display_name" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.122217226Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.418974ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.126770139Z level=info msg="Executing migration" id="add column group_name" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.134163204Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.392395ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.140561363Z level=info msg="Executing migration" id="add index role.org_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.141637959Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.076386ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.147247929Z level=info msg="Executing migration" id="add unique index role_org_id_name" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.148747667Z 
level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.497638ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.153798853Z level=info msg="Executing migration" id="add index role_org_id_uid" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.1557161Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.916017ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.161078374Z level=info msg="Executing migration" id="create team role table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.161984526Z level=info msg="Migration successfully executed" id="create team role table" duration=905.902µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.166294384Z level=info msg="Executing migration" id="add index team_role.org_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.168023447Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.729243ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.173952595Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.175134234Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.181499ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.182519228Z level=info msg="Executing migration" id="add index team_role.team_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.184246851Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.725113ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.190743093Z level=info msg="Executing migration" id="create user role table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.191648716Z level=info msg="Migration successfully executed" id="create user role table" duration=905.012µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.196972148Z level=info msg="Executing migration" id="add index user_role.org_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.198023584Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.051216ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.203358407Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.204510566Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.151909ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.210591308Z level=info msg="Executing migration" id="add index user_role.user_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.212533495Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.938277ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.21869037Z level=info msg="Executing migration" id="create builtin role table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.219723685Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.030905ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.247804975Z level=info msg="Executing migration" id="add index builtin_role.role_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.24963328Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.828286ms 
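The policy-pap entries interleaved above and below dump the Kafka ConsumerConfig used for the policy-pdp-pap topic (group.id policy-pap, StringDeserializer for key and value, enable.auto.commit true, bootstrap servers kafka:9092). As an illustrative sketch only, not code taken from policy-pap itself, the same settings can be reproduced with the plain kafka-clients API; the class name and the poll loop below are assumptions, while the property values and the topic name come from the log (the kafka:9092 address resolves only inside the CSIT compose network):

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

// Hypothetical helper class, not part of the policy-pap code base.
public class PolicyPdpPapTail {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Values mirror the ConsumerConfig dump in the policy-pap log entries.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // resolvable only inside the compose network
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // The log shows policy-pap subscribing to the policy-pdp-pap topic.
            consumer.subscribe(List.of("policy-pdp-pap"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // Each record value is a PDP_STATUS / PDP_UPDATE style JSON payload,
                    // like the ones echoed by policy-apex-pdp earlier in this log.
                    System.out.println(record.value());
                }
            }
        }
    }
}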
23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.254991544Z level=info msg="Executing migration" id="add index builtin_role.name" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.256079011Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.087017ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.260978603Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.271493885Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=10.516462ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.275819692Z level=info msg="Executing migration" id="add index builtin_role.org_id" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.276653393Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=833.311µs 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.283407972Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 23:16:37 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:37 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:37 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:37 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:37 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:37 policy-pap | sasl.mechanism = GSSAPI 23:16:37 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:37 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:37 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:37 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:37 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:37 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:37 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:37 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:37 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:37 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:37 policy-pap | security.protocol = PLAINTEXT 23:16:37 policy-pap | security.providers = null 23:16:37 policy-pap | send.buffer.bytes = 131072 23:16:37 policy-pap | session.timeout.ms = 45000 23:16:37 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:37 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:37 policy-pap | ssl.cipher.suites = null 23:16:37 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:37 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:37 policy-pap | ssl.engine.factory.class = null 23:16:37 policy-pap | ssl.key.password = null 23:16:37 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:37 policy-pap | ssl.keystore.certificate.chain = null 23:16:37 policy-pap | ssl.keystore.key = null 23:16:37 policy-pap | ssl.keystore.location = null 23:16:37 policy-pap | ssl.keystore.password = null 23:16:37 policy-pap | ssl.keystore.type = JKS 23:16:37 policy-pap | ssl.protocol = TLSv1.3 23:16:37 policy-pap | ssl.provider = null 23:16:37 policy-pap | ssl.secure.random.implementation = null 23:16:37 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:37 policy-pap | ssl.truststore.certificates = null 23:16:37 policy-pap | ssl.truststore.location = null 23:16:37 policy-pap | ssl.truststore.password = null 23:16:37 policy-pap | ssl.truststore.type = JKS 23:16:37 policy-pap | value.deserializer = class 
org.apache.kafka.common.serialization.StringDeserializer 23:16:37 policy-pap | 23:16:37 policy-pap | [2024-04-10T23:14:37.435+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:37 policy-pap | [2024-04-10T23:14:37.435+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:37 policy-pap | [2024-04-10T23:14:37.435+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1712790877435 23:16:37 policy-pap | [2024-04-10T23:14:37.435+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 23:16:37 policy-pap | [2024-04-10T23:14:37.852+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 23:16:37 policy-pap | [2024-04-10T23:14:38.004+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 23:16:37 policy-pap | [2024-04-10T23:14:38.249+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@53917c92, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@1fa796a4, org.springframework.security.web.context.SecurityContextHolderFilter@1f013047, org.springframework.security.web.header.HeaderWriterFilter@ce0bbd5, org.springframework.security.web.authentication.logout.LogoutFilter@44c2e8a8, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@4fbbd98c, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@51566ce0, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@17e6d07b, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@68de8522, org.springframework.security.web.access.ExceptionTranslationFilter@1f7557fe, org.springframework.security.web.access.intercept.AuthorizationFilter@3879feec] 23:16:37 policy-pap | [2024-04-10T23:14:39.167+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 23:16:37 policy-pap | [2024-04-10T23:14:39.276+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 23:16:37 policy-pap | [2024-04-10T23:14:39.312+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' 23:16:37 policy-pap | [2024-04-10T23:14:39.333+00:00|INFO|ServiceManager|main] Policy PAP starting 23:16:37 policy-pap | [2024-04-10T23:14:39.333+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 23:16:37 policy-pap | [2024-04-10T23:14:39.334+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 23:16:37 policy-pap | [2024-04-10T23:14:39.335+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 23:16:37 policy-pap | [2024-04-10T23:14:39.335+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 23:16:37 policy-pap | [2024-04-10T23:14:39.335+00:00|INFO|ServiceManager|main] 
Policy PAP starting Heartbeat Request ID Dispatcher 23:16:37 policy-pap | [2024-04-10T23:14:39.335+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 23:16:37 policy-pap | [2024-04-10T23:14:39.340+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=9f4a6b38-834c-48e5-bf2a-977246f9eaf0, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@3ff3275b 23:16:37 policy-pap | [2024-04-10T23:14:39.351+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=9f4a6b38-834c-48e5-bf2a-977246f9eaf0, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:37 policy-pap | [2024-04-10T23:14:39.352+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:37 policy-pap | allow.auto.create.topics = true 23:16:37 policy-pap | auto.commit.interval.ms = 5000 23:16:37 policy-pap | auto.include.jmx.reporter = true 23:16:37 policy-pap | auto.offset.reset = latest 23:16:37 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:07 23:16:37 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:07 23:16:37 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:07 23:16:37 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:07 23:16:37 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:07 23:16:37 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:07 23:16:37 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:07 23:16:37 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:07 23:16:37 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:07 23:16:37 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:07 23:16:37 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:07 23:16:37 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:07 23:16:37 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:07 
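A few entries above, policy-pap reports Tomcat listening on port 6969 with context path '/policy/pap/v1', and the apex-pdp request log shows the 'policyadmin' account authenticating over HTTP basic auth. A minimal probe of the PAP REST endpoint could look like the sketch below; the '/healthcheck' path suffix, the reuse of the policyadmin user and the placeholder password are assumptions for this sketch and must match the actual deployment:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

// Hypothetical probe, not part of the CSIT scripts.
public class PapHealthProbe {
    public static void main(String[] args) throws Exception {
        // Port 6969 and the /policy/pap/v1 context path come from the Tomcat start-up
        // entry in the log; the /healthcheck suffix, the user and the placeholder
        // password are assumptions.
        String basic = Base64.getEncoder()
                .encodeToString("policyadmin:CHANGE_ME".getBytes());
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:6969/policy/pap/v1/healthcheck"))
                .header("Authorization", "Basic " + basic)
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}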
23:16:37 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:08 23:16:37 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:08 23:16:37 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:08 23:16:37 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:08 23:16:37 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:08 23:16:37 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:08 23:16:37 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:08 23:16:37 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:08 23:16:37 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:08 23:16:37 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:08 23:16:37 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:08 23:16:37 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:08 23:16:37 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:08 23:16:37 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:08 23:16:37 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:08 23:16:37 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:08 23:16:37 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:08 23:16:37 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:08 23:16:37 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:08 23:16:37 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:08 23:16:37 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:08 23:16:37 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:08 23:16:37 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:08 23:16:37 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:08 23:16:37 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:08 23:16:37 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:09 23:16:37 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:09 23:16:37 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:09 23:16:37 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:09 23:16:37 
policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:09 23:16:37 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:09 23:16:37 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:09 23:16:37 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:09 23:16:37 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:09 23:16:37 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:09 23:16:37 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:09 23:16:37 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:09 23:16:37 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:09 23:16:37 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:09 23:16:37 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:09 23:16:37 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:09 23:16:37 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:09 23:16:37 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:09 23:16:37 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:09 23:16:37 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:09 23:16:37 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:09 23:16:37 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:09 23:16:37 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:09 23:16:37 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:09 23:16:37 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:09 23:16:37 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:10 23:16:37 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:10 23:16:37 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:10 23:16:37 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:10 23:16:37 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:10 23:16:37 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:10 23:16:37 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:10 23:16:37 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:10 
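The numbered policy-db-migrator rows printed above and below (ID, script, operation, from_version, to_version, tag, success, atTime) are the migrator's record of every upgrade script it applied. A hedged sketch of reading such a record back over JDBC is shown below; the JDBC URL, the credentials and the history-table name 'policyadmin_schema_changelog' are assumptions (only the column names echo the header printed in the log), and a MariaDB/MySQL driver must be on the classpath:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Hypothetical audit helper, not part of policy-db-migrator.
public class MigrationHistoryAudit {
    public static void main(String[] args) throws Exception {
        // URL, user, password and table name are assumptions for this sketch.
        String url = "jdbc:mariadb://localhost:3306/policyadmin";
        try (Connection conn = DriverManager.getConnection(url, "policy_user", "CHANGE_ME");
             Statement stmt = conn.createStatement();
             // Column names follow the header printed by policy-db-migrator in the log.
             ResultSet rs = stmt.executeQuery(
                     "SELECT script, operation, from_version, to_version, success, atTime"
                     + " FROM policyadmin_schema_changelog ORDER BY ID")) {
            while (rs.next()) {
                System.out.printf("%-55s %-8s %s -> %s success=%s at %s%n",
                        rs.getString("script"), rs.getString("operation"),
                        rs.getString("from_version"), rs.getString("to_version"),
                        rs.getString("success"), rs.getString("atTime"));
            }
        }
    }
}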
23:16:37 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:10 23:16:37 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:10 23:16:37 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:10 23:16:37 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:10 23:16:37 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:10 23:16:37 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:10 23:16:37 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:10 23:16:37 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:10 23:16:37 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:10 23:16:37 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:10 23:16:37 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:11 23:16:37 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:11 23:16:37 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:11 23:16:37 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:11 23:16:37 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:11 23:16:37 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1004242314070800u 1 2024-04-10 23:14:11 23:16:37 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 1004242314070900u 1 2024-04-10 23:14:11 23:16:37 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 1004242314070900u 1 2024-04-10 23:14:11 23:16:37 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 1004242314070900u 1 2024-04-10 23:14:11 23:16:37 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 1004242314070900u 1 2024-04-10 23:14:11 23:16:37 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 1004242314070900u 1 2024-04-10 23:14:11 23:16:37 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 1004242314070900u 1 2024-04-10 23:14:11 23:16:37 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1004242314070900u 1 2024-04-10 23:14:11 23:16:37 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1004242314070900u 1 2024-04-10 23:14:11 23:16:37 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1004242314070900u 1 2024-04-10 23:14:11 23:16:37 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 1004242314070900u 1 2024-04-10 23:14:11 23:16:37 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 1004242314070900u 1 2024-04-10 23:14:11 23:16:37 policy-db-migrator | 108 
0210-sequence.sql upgrade 0800 0900 1004242314070900u 1 2024-04-10 23:14:11 23:16:37 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 1004242314070900u 1 2024-04-10 23:14:11 23:16:37 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 1004242314071000u 1 2024-04-10 23:14:11 23:16:37 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 1004242314071000u 1 2024-04-10 23:14:12 23:16:37 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 1004242314071000u 1 2024-04-10 23:14:12 23:16:37 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 1004242314071000u 1 2024-04-10 23:14:12 23:16:37 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 1004242314071000u 1 2024-04-10 23:14:12 23:16:37 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 1004242314071000u 1 2024-04-10 23:14:12 23:16:37 kafka | [2024-04-10 23:14:40,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,007] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,007] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,008] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,008] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,008] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,008] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,008] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,008] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,008] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,008] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,008] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,008] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,008] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,008] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,008] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,008] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,008] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 kafka | [2024-04-10 23:14:40,008] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.284592551Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.184289ms 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.290348634Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.292024756Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.676202ms 23:16:37 policy-pap | bootstrap.servers = [kafka:9092] 23:16:37 policy-pap | check.crcs = true 23:16:37 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:37 policy-pap | client.id = consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3 23:16:37 policy-pap | client.rack = 23:16:37 policy-pap | connections.max.idle.ms = 540000 23:16:37 policy-pap | default.api.timeout.ms = 60000 23:16:37 policy-pap | enable.auto.commit = true 23:16:37 policy-pap | exclude.internal.topics = true 23:16:37 policy-pap | fetch.max.bytes = 52428800 23:16:37 policy-pap | fetch.max.wait.ms = 500 23:16:37 policy-pap | fetch.min.bytes = 1 23:16:37 policy-pap | group.id = 9f4a6b38-834c-48e5-bf2a-977246f9eaf0 23:16:37 policy-pap | group.instance.id = null 23:16:37 policy-pap | heartbeat.interval.ms = 3000 23:16:37 policy-pap | interceptor.classes = [] 23:16:37 policy-pap | 
internal.leave.group.on.close = true 23:16:37 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:37 policy-pap | isolation.level = read_uncommitted 23:16:37 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:37 policy-pap | max.partition.fetch.bytes = 1048576 23:16:37 policy-pap | max.poll.interval.ms = 300000 23:16:37 policy-pap | max.poll.records = 500 23:16:37 policy-pap | metadata.max.age.ms = 300000 23:16:37 policy-pap | metric.reporters = [] 23:16:37 policy-pap | metrics.num.samples = 2 23:16:37 policy-pap | metrics.recording.level = INFO 23:16:37 policy-pap | metrics.sample.window.ms = 30000 23:16:37 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:37 policy-pap | receive.buffer.bytes = 65536 23:16:37 policy-pap | reconnect.backoff.max.ms = 1000 23:16:37 policy-pap | reconnect.backoff.ms = 50 23:16:37 policy-pap | request.timeout.ms = 30000 23:16:37 policy-pap | retry.backoff.ms = 100 23:16:37 policy-pap | sasl.client.callback.handler.class = null 23:16:37 policy-pap | sasl.jaas.config = null 23:16:37 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:37 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:37 policy-pap | sasl.kerberos.service.name = null 23:16:37 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:37 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:37 policy-pap | sasl.login.callback.handler.class = null 23:16:37 policy-pap | sasl.login.class = null 23:16:37 policy-pap | sasl.login.connect.timeout.ms = null 23:16:37 policy-pap | sasl.login.read.timeout.ms = null 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.297322829Z level=info msg="Executing migration" id="add unique index role.uid" 23:16:37 kafka | [2024-04-10 23:14:40,008] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:37 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 1004242314071000u 1 2024-04-10 23:14:12 23:16:37 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.299439201Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=2.119473ms 23:16:37 kafka | [2024-04-10 23:14:40,021] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 1004242314071000u 1 2024-04-10 23:14:12 23:16:37 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.303849751Z level=info msg="Executing migration" id="create seed assignment table" 23:16:37 kafka | [2024-04-10 23:14:40,021] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 1004242314071000u 1 2024-04-10 23:14:12 23:16:37 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.304838955Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=988.834µs 23:16:37 kafka | [2024-04-10 23:14:40,021] TRACE [Controller id=1 
epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 1004242314071100u 1 2024-04-10 23:14:12 23:16:37 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.308590879Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 23:16:37 kafka | [2024-04-10 23:14:40,021] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 1004242314071200u 1 2024-04-10 23:14:12 23:16:37 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.310365983Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.773524ms 23:16:37 kafka | [2024-04-10 23:14:40,021] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 1004242314071200u 1 2024-04-10 23:14:12 23:16:37 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.315771088Z level=info msg="Executing migration" id="add column hidden to role table" 23:16:37 kafka | [2024-04-10 23:14:40,021] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 1004242314071200u 1 2024-04-10 23:14:12 23:16:37 policy-pap | sasl.mechanism = GSSAPI 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.32465321Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=8.882703ms 23:16:37 kafka | [2024-04-10 23:14:40,021] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 1004242314071200u 1 2024-04-10 23:14:12 23:16:37 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.331572052Z level=info msg="Executing migration" id="permission kind migration" 23:16:37 kafka | [2024-04-10 23:14:40,021] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 1004242314071300u 1 2024-04-10 23:14:12 23:16:37 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.339430198Z level=info msg="Migration successfully executed" id="permission kind migration" duration=7.856135ms 23:16:37 kafka | [2024-04-10 23:14:40,021] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 1004242314071300u 1 2024-04-10 23:14:12 23:16:37 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:37 grafana | logger=migrator 
t=2024-04-10T23:14:08.342965455Z level=info msg="Executing migration" id="permission attribute migration" 23:16:37 kafka | [2024-04-10 23:14:40,021] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 1004242314071300u 1 2024-04-10 23:14:12 23:16:37 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.350806141Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=7.840136ms 23:16:37 kafka | [2024-04-10 23:14:40,021] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-db-migrator | policyadmin: OK @ 1300 23:16:37 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.355480198Z level=info msg="Executing migration" id="permission identifier migration" 23:16:37 kafka | [2024-04-10 23:14:40,021] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.363397374Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=7.918556ms 23:16:37 kafka | [2024-04-10 23:14:40,021] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.370707547Z level=info msg="Executing migration" id="add permission identifier index" 23:16:37 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.371622739Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=874.971µs 23:16:37 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.376486651Z level=info msg="Executing migration" id="add permission action scope role_id index" 23:16:37 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.37846001Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.9729ms 23:16:37 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-pap | security.protocol = PLAINTEXT 23:16:37 grafana | 
logger=migrator t=2024-04-10T23:14:08.383925496Z level=info msg="Executing migration" id="remove permission role_id action scope index" 23:16:37 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-pap | security.providers = null 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.385081645Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.156189ms 23:16:37 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-pap | send.buffer.bytes = 131072 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.388603602Z level=info msg="Executing migration" id="create query_history table v1" 23:16:37 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-pap | session.timeout.ms = 45000 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.390042468Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.436606ms 23:16:37 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.393668968Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 23:16:37 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.395489394Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.820146ms 23:16:37 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-pap | ssl.cipher.suites = null 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.401143085Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 23:16:37 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.40134436Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=200.655µs 23:16:37 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.404672023Z level=info msg="Executing migration" id="rbac disabled 
migrator" 23:16:37 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-pap | ssl.engine.factory.class = null 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.404767915Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=96.802µs 23:16:37 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-pap | ssl.key.password = null 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.411063552Z level=info msg="Executing migration" id="teams permissions migration" 23:16:37 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.411935574Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=871.792µs 23:16:37 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-pap | ssl.keystore.certificate.chain = null 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.417488682Z level=info msg="Executing migration" id="dashboard permissions" 23:16:37 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-pap | ssl.keystore.key = null 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.418534808Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=1.047986ms 23:16:37 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-pap | ssl.keystore.location = null 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.423677046Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 23:16:37 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.424904366Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=1.2271ms 23:16:37 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.430062975Z level=info msg="Executing migration" id="drop managed folder create actions" 23:16:37 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-pap | ssl.keystore.password = null 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.430385883Z level=info msg="Migration successfully executed" id="drop managed folder create actions" 
duration=321.638µs 23:16:37 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-pap | ssl.keystore.type = JKS 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.434558598Z level=info msg="Executing migration" id="alerting notification permissions" 23:16:37 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-pap | ssl.protocol = TLSv1.3 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.435443039Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=882.972µs 23:16:37 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-pap | ssl.provider = null 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.439187122Z level=info msg="Executing migration" id="create query_history_star table v1" 23:16:37 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-pap | ssl.secure.random.implementation = null 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.440660249Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.471857ms 23:16:37 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.447676264Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 23:16:37 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-pap | ssl.truststore.certificates = null 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.448919935Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.243371ms 23:16:37 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 policy-pap | ssl.truststore.location = null 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.455106399Z level=info msg="Executing migration" id="add column org_id in query_history_star" 23:16:37 policy-pap | ssl.truststore.password = null 23:16:37 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) 23:16:37 grafana | logger=migrator t=2024-04-10T23:14:08.465821626Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=10.716197ms 23:16:38 policy-pap | ssl.truststore.type = JKS 23:16:38 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica 
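
[Editor's note] The ConsumerConfig dump interleaved through the policy-pap lines around this point (bootstrap.servers = [kafka:9092], group.id = 9f4a6b38-834c-48e5-bf2a-977246f9eaf0, StringDeserializer for key and value, auto.offset.reset = latest), followed just below by the subscription to policy-pdp-pap, corresponds to an ordinary Kafka consumer; most of the other values printed are stock Kafka defaults. The following is a minimal sketch with the same key settings, copied from the log; it is an illustration under those assumptions, not PAP's actual dispatcher code.

// Minimal sketch only: a plain Kafka consumer using the key values from the
// ConsumerConfig dump in this log, subscribed to the policy-pdp-pap topic.
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PdpPapConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");                   // from the log
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "9f4a6b38-834c-48e5-bf2a-977246f9eaf0");  // from the log
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");                       // from the log

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());
            }
        }
    }
}
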
(state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.472723178Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 23:16:38 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:38 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.473092008Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=369.69µs 23:16:38 policy-pap | 23:16:38 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.479116668Z level=info msg="Executing migration" id="create correlation table v1" 23:16:38 policy-pap | [2024-04-10T23:14:39.359+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:38 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.480280816Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.161888ms 23:16:38 policy-pap | [2024-04-10T23:14:39.359+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:38 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.484869241Z level=info msg="Executing migration" id="add index correlations.uid" 23:16:38 policy-pap | [2024-04-10T23:14:39.359+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1712790879358 23:16:38 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.4860644Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.195629ms 23:16:38 policy-pap | [2024-04-10T23:14:39.359+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3, groupId=9f4a6b38-834c-48e5-bf2a-977246f9eaf0] Subscribed to topic(s): policy-pdp-pap 23:16:38 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.493480655Z level=info msg="Executing migration" id="add index correlations.source_uid" 23:16:38 kafka | [2024-04-10 23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:39.359+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.494701365Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.22051ms 23:16:38 kafka | [2024-04-10 
23:14:40,022] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:39.359+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=cf7dae3d-d1d4-467e-b3c4-7c3ddf491bf7, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@2ea0161f 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.501320451Z level=info msg="Executing migration" id="add correlation config column" 23:16:38 kafka | [2024-04-10 23:14:40,023] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:39.360+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=cf7dae3d-d1d4-467e-b3c4-7c3ddf491bf7, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:38 kafka | [2024-04-10 23:14:40,216] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.511682328Z level=info msg="Migration successfully executed" id="add correlation config column" duration=10.362567ms 23:16:38 policy-pap | [2024-04-10T23:14:39.360+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:38 kafka | [2024-04-10 23:14:40,216] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.51498215Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 23:16:38 policy-pap | allow.auto.create.topics = true 23:16:38 kafka | [2024-04-10 23:14:40,216] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.516101699Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - 
v1" duration=1.119539ms 23:16:38 policy-pap | auto.commit.interval.ms = 5000 23:16:38 kafka | [2024-04-10 23:14:40,216] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.520723774Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 23:16:38 policy-pap | auto.include.jmx.reporter = true 23:16:38 kafka | [2024-04-10 23:14:40,217] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.521815591Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.092087ms 23:16:38 policy-pap | auto.offset.reset = latest 23:16:38 kafka | [2024-04-10 23:14:40,217] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.526276242Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 23:16:38 policy-pap | bootstrap.servers = [kafka:9092] 23:16:38 kafka | [2024-04-10 23:14:40,217] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.54828005Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=22.004138ms 23:16:38 policy-pap | check.crcs = true 23:16:38 kafka | [2024-04-10 23:14:40,217] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.551390338Z level=info msg="Executing migration" id="create correlation v2" 23:16:38 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:38 kafka | [2024-04-10 23:14:40,217] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.55226489Z level=info msg="Migration successfully executed" id="create correlation v2" duration=873.873µs 23:16:38 policy-pap | client.id = consumer-policy-pap-4 23:16:38 kafka | [2024-04-10 23:14:40,217] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to 
OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.557442589Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 23:16:38 policy-pap | client.rack = 23:16:38 kafka | [2024-04-10 23:14:40,217] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.558335721Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=892.592µs 23:16:38 policy-pap | connections.max.idle.ms = 540000 23:16:38 kafka | [2024-04-10 23:14:40,217] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.562217337Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 23:16:38 policy-pap | default.api.timeout.ms = 60000 23:16:38 kafka | [2024-04-10 23:14:40,217] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.564119385Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.905138ms 23:16:38 policy-pap | enable.auto.commit = true 23:16:38 kafka | [2024-04-10 23:14:40,217] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.571114139Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 23:16:38 policy-pap | exclude.internal.topics = true 23:16:38 kafka | [2024-04-10 23:14:40,217] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.572296739Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.18242ms 23:16:38 policy-pap | fetch.max.bytes = 52428800 23:16:38 kafka | [2024-04-10 23:14:40,217] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 grafana | logger=migrator 
t=2024-04-10T23:14:08.608397388Z level=info msg="Executing migration" id="copy correlation v1 to v2" 23:16:38 policy-pap | fetch.max.wait.ms = 500 23:16:38 kafka | [2024-04-10 23:14:40,217] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.608978123Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=581.635µs 23:16:38 policy-pap | fetch.min.bytes = 1 23:16:38 kafka | [2024-04-10 23:14:40,217] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.613148276Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 23:16:38 policy-pap | group.id = policy-pap 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.614551452Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.403885ms 23:16:38 kafka | [2024-04-10 23:14:40,217] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | group.instance.id = null 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.618662574Z level=info msg="Executing migration" id="add provisioning column" 23:16:38 kafka | [2024-04-10 23:14:40,217] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | heartbeat.interval.ms = 3000 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.630234693Z level=info msg="Migration successfully executed" id="add provisioning column" duration=11.57232ms 23:16:38 kafka | [2024-04-10 23:14:40,217] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | interceptor.classes = [] 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.634698703Z level=info msg="Executing migration" id="create entity_events table" 23:16:38 kafka | [2024-04-10 23:14:40,217] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | internal.leave.group.on.close = true 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.635405101Z level=info msg="Migration successfully executed" id="create entity_events table" duration=706.198µs 23:16:38 kafka 
| [2024-04-10 23:14:40,217] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.639520374Z level=info msg="Executing migration" id="create dashboard public config v1" 23:16:38 kafka | [2024-04-10 23:14:40,217] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | isolation.level = read_uncommitted 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.64059678Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.075416ms 23:16:38 kafka | [2024-04-10 23:14:40,217] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.64457725Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 23:16:38 kafka | [2024-04-10 23:14:40,217] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | max.partition.fetch.bytes = 1048576 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.645268057Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 23:16:38 kafka | [2024-04-10 23:14:40,217] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | max.poll.interval.ms = 300000 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.650416475Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:38 kafka | [2024-04-10 23:14:40,218] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | max.poll.records = 500 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.650972149Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:38 kafka | [2024-04-10 23:14:40,218] INFO [Controller id=1 
epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | metadata.max.age.ms = 300000 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.65462464Z level=info msg="Executing migration" id="Drop old dashboard public config table" 23:16:38 kafka | [2024-04-10 23:14:40,218] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | metric.reporters = [] 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.655484902Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=859.712µs 23:16:38 kafka | [2024-04-10 23:14:40,218] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | metrics.num.samples = 2 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.659965603Z level=info msg="Executing migration" id="recreate dashboard public config v1" 23:16:38 kafka | [2024-04-10 23:14:40,218] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | metrics.recording.level = INFO 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.661297336Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.331683ms 23:16:38 kafka | [2024-04-10 23:14:40,218] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | metrics.sample.window.ms = 30000 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.666551287Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 23:16:38 kafka | [2024-04-10 23:14:40,218] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.668570707Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=2.01514ms 23:16:38 kafka | [2024-04-10 23:14:40,218] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state 
LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | receive.buffer.bytes = 65536 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.674285549Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:38 kafka | [2024-04-10 23:14:40,218] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | reconnect.backoff.max.ms = 1000 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.675690295Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.402926ms 23:16:38 kafka | [2024-04-10 23:14:40,218] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | reconnect.backoff.ms = 50 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.680678109Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 23:16:38 kafka | [2024-04-10 23:14:40,218] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | request.timeout.ms = 30000 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.681956891Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.279522ms 23:16:38 kafka | [2024-04-10 23:14:40,218] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | retry.backoff.ms = 100 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.686355121Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 23:16:38 kafka | [2024-04-10 23:14:40,218] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | sasl.client.callback.handler.class = null 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.68755977Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.204929ms 23:16:38 kafka | [2024-04-10 23:14:40,218] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), 
leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | sasl.jaas.config = null 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.691332964Z level=info msg="Executing migration" id="Drop public config table" 23:16:38 kafka | [2024-04-10 23:14:40,218] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.69234052Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.007536ms 23:16:38 kafka | [2024-04-10 23:14:40,218] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.696927423Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 23:16:38 kafka | [2024-04-10 23:14:40,218] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | sasl.kerberos.service.name = null 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.698191705Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.264692ms 23:16:38 kafka | [2024-04-10 23:14:40,218] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.701825866Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 23:16:38 kafka | [2024-04-10 23:14:40,218] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.703051337Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.225421ms 23:16:38 kafka | [2024-04-10 23:14:40,218] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | sasl.login.callback.handler.class = null 23:16:38 grafana | logger=migrator 
t=2024-04-10T23:14:08.707572289Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 23:16:38 kafka | [2024-04-10 23:14:40,218] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | sasl.login.class = null 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.709716292Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=2.142603ms 23:16:38 kafka | [2024-04-10 23:14:40,218] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 policy-pap | sasl.login.connect.timeout.ms = null 23:16:38 kafka | [2024-04-10 23:14:40,218] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.714785969Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 23:16:38 policy-pap | sasl.login.read.timeout.ms = null 23:16:38 kafka | [2024-04-10 23:14:40,218] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.716714406Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.929017ms 23:16:38 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:38 kafka | [2024-04-10 23:14:40,227] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.721377223Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 23:16:38 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:38 kafka | [2024-04-10 23:14:40,227] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.745644748Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to 
dashboard_public - v2" duration=24.266495ms 23:16:38 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:38 kafka | [2024-04-10 23:14:40,227] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.751385201Z level=info msg="Executing migration" id="add annotations_enabled column" 23:16:38 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:38 kafka | [2024-04-10 23:14:40,227] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.757864952Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=6.478611ms 23:16:38 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:38 kafka | [2024-04-10 23:14:40,227] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.76140919Z level=info msg="Executing migration" id="add time_selection_enabled column" 23:16:38 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:38 kafka | [2024-04-10 23:14:40,227] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.769916612Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.507142ms 23:16:38 policy-pap | sasl.mechanism = GSSAPI 23:16:38 kafka | [2024-04-10 23:14:40,227] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.773568603Z level=info msg="Executing migration" id="delete orphaned public dashboards" 23:16:38 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:38 kafka | [2024-04-10 23:14:40,227] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.773884301Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=316.348µs 23:16:38 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:38 kafka | [2024-04-10 23:14:40,227] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.777215234Z level=info msg="Executing migration" id="add share column" 23:16:38 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:38 kafka | [2024-04-10 23:14:40,227] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.785967242Z level=info msg="Migration successfully executed" id="add share column" duration=8.752068ms 23:16:38 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:38 kafka | [2024-04-10 23:14:40,227] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.791285215Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 23:16:38 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:38 kafka | [2024-04-10 23:14:40,227] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.791613633Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=328.509µs 23:16:38 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:38 kafka | [2024-04-10 23:14:40,227] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) 23:16:38 grafana | logger=migrator 
t=2024-04-10T23:14:08.795263864Z level=info msg="Executing migration" id="create file table" 23:16:38 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.797378286Z level=info msg="Migration successfully executed" id="create file table" duration=2.114062ms 23:16:38 kafka | [2024-04-10 23:14:40,227] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) 23:16:38 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.80151328Z level=info msg="Executing migration" id="file table idx: path natural pk" 23:16:38 kafka | [2024-04-10 23:14:40,227] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 23:16:38 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.803520249Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=2.007469ms 23:16:38 kafka | [2024-04-10 23:14:40,227] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 23:16:38 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.809845757Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 23:16:38 kafka | [2024-04-10 23:14:40,227] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 23:16:38 policy-pap | security.protocol = PLAINTEXT 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.811086718Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.241231ms 23:16:38 kafka | [2024-04-10 23:14:40,227] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) 23:16:38 policy-pap | security.providers = null 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.815126939Z level=info msg="Executing migration" id="create file_meta 
table" 23:16:38 kafka | [2024-04-10 23:14:40,227] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 23:16:38 policy-pap | send.buffer.bytes = 131072 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.816554404Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.427335ms 23:16:38 kafka | [2024-04-10 23:14:40,227] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) 23:16:38 policy-pap | session.timeout.ms = 45000 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.822090852Z level=info msg="Executing migration" id="file table idx: path key" 23:16:38 kafka | [2024-04-10 23:14:40,227] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 23:16:38 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.823388985Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.297813ms 23:16:38 kafka | [2024-04-10 23:14:40,227] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 23:16:38 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.833450765Z level=info msg="Executing migration" id="set path collation in file table" 23:16:38 kafka | [2024-04-10 23:14:40,228] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 23:16:38 policy-pap | ssl.cipher.suites = null 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.833749542Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=297.277µs 23:16:38 kafka | [2024-04-10 23:14:40,228] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to 
broker 1 for partition __consumer_offsets-37 (state.change.logger) 23:16:38 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.837869575Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 23:16:38 kafka | [2024-04-10 23:14:40,228] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) 23:16:38 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.838231984Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=362.509µs 23:16:38 kafka | [2024-04-10 23:14:40,228] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) 23:16:38 policy-pap | ssl.engine.factory.class = null 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.842778247Z level=info msg="Executing migration" id="managed permissions migration" 23:16:38 kafka | [2024-04-10 23:14:40,228] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) 23:16:38 policy-pap | ssl.key.password = null 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.843834014Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=1.055277ms 23:16:38 kafka | [2024-04-10 23:14:40,228] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) 23:16:38 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.847921175Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 23:16:38 kafka | [2024-04-10 23:14:40,228] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) 23:16:38 policy-pap | ssl.keystore.certificate.chain = null 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.848191342Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" 
duration=269.847µs 23:16:38 kafka | [2024-04-10 23:14:40,228] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 23:16:38 policy-pap | ssl.keystore.key = null 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.853771461Z level=info msg="Executing migration" id="RBAC action name migrator" 23:16:38 kafka | [2024-04-10 23:14:40,228] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 23:16:38 policy-pap | ssl.keystore.location = null 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.855094484Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.325503ms 23:16:38 kafka | [2024-04-10 23:14:40,228] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 23:16:38 policy-pap | ssl.keystore.password = null 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.858567481Z level=info msg="Executing migration" id="Add UID column to playlist" 23:16:38 policy-pap | ssl.keystore.type = JKS 23:16:38 kafka | [2024-04-10 23:14:40,228] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.867730629Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.162708ms 23:16:38 policy-pap | ssl.protocol = TLSv1.3 23:16:38 kafka | [2024-04-10 23:14:40,228] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.870912219Z level=info msg="Executing migration" id="Update uid column values in playlist" 23:16:38 policy-pap | ssl.provider = null 23:16:38 kafka | [2024-04-10 23:14:40,228] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 
for partition __consumer_offsets-7 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.871135994Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=220.395µs 23:16:38 policy-pap | ssl.secure.random.implementation = null 23:16:38 kafka | [2024-04-10 23:14:40,228] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.874871737Z level=info msg="Executing migration" id="Add index for uid in playlist" 23:16:38 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:38 kafka | [2024-04-10 23:14:40,228] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.876093057Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.22121ms 23:16:38 policy-pap | ssl.truststore.certificates = null 23:16:38 kafka | [2024-04-10 23:14:40,228] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.881363299Z level=info msg="Executing migration" id="update group index for alert rules" 23:16:38 policy-pap | ssl.truststore.location = null 23:16:38 kafka | [2024-04-10 23:14:40,228] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.881803819Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=441.371µs 23:16:38 policy-pap | ssl.truststore.password = null 23:16:38 kafka | [2024-04-10 23:14:40,228] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.885879401Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 23:16:38 policy-pap | ssl.truststore.type = JKS 23:16:38 kafka | [2024-04-10 23:14:40,228] TRACE [Controller 
id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.886142478Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=260.267µs 23:16:38 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:38 kafka | [2024-04-10 23:14:40,228] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.889554323Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 23:16:38 policy-pap | 23:16:38 kafka | [2024-04-10 23:14:40,228] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.890082586Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=528.043µs 23:16:38 policy-pap | [2024-04-10T23:14:39.365+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:38 kafka | [2024-04-10 23:14:40,228] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.89423438Z level=info msg="Executing migration" id="add action column to seed_assignment" 23:16:38 policy-pap | [2024-04-10T23:14:39.365+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:38 kafka | [2024-04-10 23:14:40,228] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.90348701Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.25109ms 23:16:38 policy-pap | [2024-04-10T23:14:39.365+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1712790879365 23:16:38 kafka | [2024-04-10 23:14:40,229] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.906966136Z level=info msg="Executing migration" id="add scope column to seed_assignment" 23:16:38 policy-pap | [2024-04-10T23:14:39.365+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 23:16:38 kafka | [2024-04-10 23:14:40,229] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.915941301Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=8.974595ms 23:16:38 policy-pap | [2024-04-10T23:14:39.365+00:00|INFO|ServiceManager|main] Policy PAP starting topics 23:16:38 kafka | [2024-04-10 23:14:40,229] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.919503959Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 23:16:38 kafka | [2024-04-10 23:14:40,229] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.92074738Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.243391ms 23:16:38 policy-pap | [2024-04-10T23:14:39.365+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=cf7dae3d-d1d4-467e-b3c4-7c3ddf491bf7, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:38 kafka | [2024-04-10 23:14:40,229] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:39.365+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=9f4a6b38-834c-48e5-bf2a-977246f9eaf0, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:38 kafka | [2024-04-10 23:14:40,229] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.925017797Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 23:16:38 policy-pap | [2024-04-10T23:14:39.366+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=f5ddb507-c74e-4f62-96ec-39f542968bbb, alive=false, publisher=null]]: starting 23:16:38 kafka | [2024-04-10 23:14:40,232] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:08.99780124Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=72.784714ms 23:16:38 policy-pap | [2024-04-10T23:14:39.383+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:16:38 kafka | [2024-04-10 23:14:40,241] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.001326517Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 23:16:38 policy-pap | acks = -1 23:16:38 kafka | [2024-04-10 23:14:40,244] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.00261329Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.285953ms 23:16:38 policy-pap | auto.include.jmx.reporter = true 23:16:38 kafka | [2024-04-10 23:14:40,244] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.008173439Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 23:16:38 policy-pap | batch.size = 16384 23:16:38 kafka | [2024-04-10 23:14:40,244] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 
grafana | logger=migrator t=2024-04-10T23:14:09.009415179Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.241201ms 23:16:38 policy-pap | bootstrap.servers = [kafka:9092] 23:16:38 kafka | [2024-04-10 23:14:40,244] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.015853749Z level=info msg="Executing migration" id="add primary key to seed_assigment" 23:16:38 policy-pap | buffer.memory = 33554432 23:16:38 kafka | [2024-04-10 23:14:40,244] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.040492053Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=24.636544ms 23:16:38 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:38 kafka | [2024-04-10 23:14:40,244] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.044788041Z level=info msg="Executing migration" id="add origin column to seed_assignment" 23:16:38 policy-pap | client.id = producer-1 23:16:38 kafka | [2024-04-10 23:14:40,244] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.05160894Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.819589ms 23:16:38 policy-pap | compression.type = none 23:16:38 kafka | [2024-04-10 23:14:40,244] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.056561804Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 23:16:38 policy-pap | connections.max.idle.ms = 540000 23:16:38 kafka | [2024-04-10 23:14:40,244] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.056933593Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=371.209µs 23:16:38 policy-pap | delivery.timeout.ms = 120000 23:16:38 kafka | [2024-04-10 23:14:40,244] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.061508697Z level=info msg="Executing migration" id="prevent seeding OnCall access" 23:16:38 policy-pap | enable.idempotence = true 23:16:38 kafka | [2024-04-10 23:14:40,244] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.061729023Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=219.836µs 23:16:38 policy-pap | interceptor.classes = [] 23:16:38 kafka | [2024-04-10 23:14:40,244] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for 
partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.065466685Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 23:16:38 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:38 kafka | [2024-04-10 23:14:40,244] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.065757513Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=290.078µs 23:16:38 policy-pap | linger.ms = 0 23:16:38 kafka | [2024-04-10 23:14:40,244] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.071745262Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 23:16:38 policy-pap | max.block.ms = 60000 23:16:38 kafka | [2024-04-10 23:14:40,244] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.072162732Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=417.29µs 23:16:38 policy-pap | max.in.flight.requests.per.connection = 5 23:16:38 kafka | [2024-04-10 23:14:40,244] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.077007013Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 23:16:38 policy-pap | max.request.size = 1048576 23:16:38 kafka | [2024-04-10 23:14:40,244] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.077455294Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=447.991µs 23:16:38 policy-pap | metadata.max.age.ms = 300000 23:16:38 kafka | [2024-04-10 23:14:40,244] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.081953406Z level=info msg="Executing migration" id="create folder table" 23:16:38 policy-pap | metadata.max.idle.ms = 300000 23:16:38 kafka | [2024-04-10 23:14:40,244] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.083437793Z level=info msg="Migration successfully executed" id="create folder table" duration=1.485377ms 23:16:38 policy-pap | metric.reporters = [] 23:16:38 kafka | [2024-04-10 23:14:40,244] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.089776281Z level=info msg="Executing 
migration" id="Add index for parent_uid" 23:16:38 policy-pap | metrics.num.samples = 2 23:16:38 kafka | [2024-04-10 23:14:40,244] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.091242847Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.465666ms 23:16:38 policy-pap | metrics.recording.level = INFO 23:16:38 kafka | [2024-04-10 23:14:40,244] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.096207571Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 23:16:38 policy-pap | metrics.sample.window.ms = 30000 23:16:38 kafka | [2024-04-10 23:14:40,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.098336665Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=2.128904ms 23:16:38 policy-pap | partitioner.adaptive.partitioning.enable = true 23:16:38 kafka | [2024-04-10 23:14:40,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.103337138Z level=info msg="Executing migration" id="Update folder title length" 23:16:38 policy-pap | partitioner.availability.timeout.ms = 0 23:16:38 kafka | [2024-04-10 23:14:40,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.103416271Z level=info msg="Migration successfully executed" id="Update folder title length" duration=78.643µs 23:16:38 policy-pap | partitioner.class = null 23:16:38 kafka | [2024-04-10 23:14:40,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.107175855Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 23:16:38 policy-pap | partitioner.ignore.keys = false 23:16:38 kafka | [2024-04-10 23:14:40,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.108490667Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.316152ms 23:16:38 policy-pap | receive.buffer.bytes = 32768 23:16:38 kafka | [2024-04-10 23:14:40,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.113919062Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 23:16:38 policy-pap | reconnect.backoff.max.ms = 1000 23:16:38 kafka | [2024-04-10 23:14:40,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.115113003Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.193971ms 23:16:38 policy-pap | reconnect.backoff.ms = 50 23:16:38 kafka | [2024-04-10 23:14:40,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.119913721Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 23:16:38 policy-pap | request.timeout.ms = 30000 23:16:38 kafka | [2024-04-10 23:14:40,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.121998404Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=2.084343ms 23:16:38 policy-pap | retries = 2147483647 23:16:38 kafka | [2024-04-10 23:14:40,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.129629674Z level=info msg="Executing migration" id="Sync dashboard and folder table" 23:16:38 policy-pap | retry.backoff.ms = 100 23:16:38 kafka | [2024-04-10 23:14:40,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.130170147Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=540.493µs 23:16:38 policy-pap | sasl.client.callback.handler.class = null 23:16:38 kafka | [2024-04-10 23:14:40,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.135549131Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 23:16:38 policy-pap | sasl.jaas.config = null 23:16:38 kafka | [2024-04-10 23:14:40,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.136090755Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=542.084µs 23:16:38 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:38 kafka | [2024-04-10 23:14:40,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.140990467Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 23:16:38 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:38 kafka | [2024-04-10 23:14:40,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.142895835Z level=info msg="Migration successfully executed" id="Remove unique index 
UQE_folder_uid_org_id" duration=1.905108ms 23:16:38 policy-pap | sasl.kerberos.service.name = null 23:16:38 kafka | [2024-04-10 23:14:40,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.147586561Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 23:16:38 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:38 kafka | [2024-04-10 23:14:40,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.149013807Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.426316ms 23:16:38 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:38 kafka | [2024-04-10 23:14:40,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.153832127Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 23:16:38 policy-pap | sasl.login.callback.handler.class = null 23:16:38 kafka | [2024-04-10 23:14:40,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.155751495Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.918707ms 23:16:38 policy-pap | sasl.login.class = null 23:16:38 kafka | [2024-04-10 23:14:40,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.160928174Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 23:16:38 policy-pap | sasl.login.connect.timeout.ms = null 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.162277628Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.349814ms 23:16:38 kafka | [2024-04-10 23:14:40,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 policy-pap | sasl.login.read.timeout.ms = null 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.168330108Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 23:16:38 kafka | [2024-04-10 23:14:40,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.16960366Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.274142ms 23:16:38 kafka | [2024-04-10 23:14:40,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:38 grafana | logger=migrator 
t=2024-04-10T23:14:09.173692552Z level=info msg="Executing migration" id="create anon_device table" 23:16:38 kafka | [2024-04-10 23:14:40,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.174732518Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.037546ms 23:16:38 kafka | [2024-04-10 23:14:40,245] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:38 kafka | [2024-04-10 23:14:40,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.17962833Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 23:16:38 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:38 kafka | [2024-04-10 23:14:40,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.181809653Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=2.181293ms 23:16:38 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:38 kafka | [2024-04-10 23:14:40,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.186971372Z level=info msg="Executing migration" id="add index anon_device.updated_at" 23:16:38 policy-pap | sasl.mechanism = GSSAPI 23:16:38 kafka | [2024-04-10 23:14:40,246] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.188497491Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.531159ms 23:16:38 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:38 kafka | [2024-04-10 23:14:40,246] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.194830588Z level=info msg="Executing migration" id="create signing_key table" 23:16:38 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:38 kafka | [2024-04-10 23:14:40,255] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.195867515Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.036127ms 23:16:38 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:38 kafka | [2024-04-10 23:14:40,258] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 
(state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.199969946Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 23:16:38 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:38 kafka | [2024-04-10 23:14:40,258] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.201406452Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.432666ms 23:16:38 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:38 kafka | [2024-04-10 23:14:40,258] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.208376226Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 23:16:38 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:38 kafka | [2024-04-10 23:14:40,258] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.209648287Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.272111ms 23:16:38 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:38 kafka | [2024-04-10 23:14:40,258] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.217456632Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 23:16:38 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:38 kafka | [2024-04-10 23:14:40,258] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.218005585Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=550.393µs 23:16:38 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:38 kafka | [2024-04-10 23:14:40,258] TRACE [Broker id=1] Received 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.222074807Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 23:16:38 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:38 kafka | [2024-04-10 23:14:40,258] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.235510982Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=13.437765ms 23:16:38 policy-pap | security.protocol = PLAINTEXT 23:16:38 kafka | [2024-04-10 23:14:40,258] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.23904593Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 23:16:38 policy-pap | security.providers = null 23:16:38 kafka | [2024-04-10 23:14:40,258] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.239613864Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=568.564µs 23:16:38 policy-pap | send.buffer.bytes = 131072 23:16:38 kafka | [2024-04-10 23:14:40,258] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.243830279Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 23:16:38 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:38 kafka | [2024-04-10 23:14:40,258] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.245161552Z level=info msg="Migration successfully executed" id="Add unique index for 
dashboard_org_id_folder_uid_title" duration=1.330173ms 23:16:38 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:38 kafka | [2024-04-10 23:14:40,258] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.275207641Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 23:16:38 policy-pap | ssl.cipher.suites = null 23:16:38 kafka | [2024-04-10 23:14:40,258] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.277005845Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.797694ms 23:16:38 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.282624485Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 23:16:38 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.28483736Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=2.212345ms 23:16:38 policy-pap | ssl.engine.factory.class = null 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.291707612Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 policy-pap | ssl.key.password = null 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.293579628Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.875346ms 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker 
id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.297435425Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 policy-pap | ssl.keystore.certificate.chain = null 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.298775688Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.340063ms 23:16:38 policy-pap | ssl.keystore.key = null 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 policy-pap | ssl.keystore.location = null 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.302672935Z level=info msg="Executing migration" id="create sso_setting table" 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 policy-pap | ssl.keystore.password = null 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.303977787Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.304382ms 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 policy-pap | ssl.keystore.type = JKS 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.310549701Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 policy-pap | ssl.protocol = TLSv1.3 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.312203232Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.655161ms 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 policy-pap | ssl.provider = null 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.317855663Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.318255173Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=400.52µs 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 policy-pap | ssl.secure.random.implementation = null 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.324119379Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.324315844Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=199.675µs 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 policy-pap | ssl.truststore.certificates = null 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.328846487Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 
policy-pap | ssl.truststore.location = null 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.338258971Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=9.411764ms 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 policy-pap | ssl.truststore.password = null 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.345698937Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 policy-pap | ssl.truststore.type = JKS 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.355102201Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.402584ms 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 policy-pap | transaction.timeout.ms = 60000 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.361836779Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 policy-pap | transactional.id = null 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.36226611Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=428.831µs 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:38 grafana | logger=migrator t=2024-04-10T23:14:09.366980488Z level=info msg="migrations completed" performed=548 skipped=0 duration=4.328124923s 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request 
LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 policy-pap | 23:16:38 grafana | logger=sqlstore t=2024-04-10T23:14:09.379661493Z level=info msg="Created default admin" user=admin 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:39.397+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 23:16:38 grafana | logger=sqlstore t=2024-04-10T23:14:09.379925419Z level=info msg="Created default organization" 23:16:38 policy-pap | [2024-04-10T23:14:39.414+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 grafana | logger=secrets t=2024-04-10T23:14:09.385051067Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 23:16:38 policy-pap | [2024-04-10T23:14:39.414+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 grafana | logger=plugin.store t=2024-04-10T23:14:09.407276951Z level=info msg="Loading plugins..." 
23:16:38 policy-pap | [2024-04-10T23:14:39.414+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1712790879414 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 grafana | logger=local.finder t=2024-04-10T23:14:09.456632531Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:39.415+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=f5ddb507-c74e-4f62-96ec-39f542968bbb, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:38 grafana | logger=plugin.store t=2024-04-10T23:14:09.456667062Z level=info msg="Plugins loaded" count=55 duration=49.390221ms 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:39.415+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=a76d0949-b67a-4138-83db-75b75d7274ea, alive=false, publisher=null]]: starting 23:16:38 grafana | logger=query_data t=2024-04-10T23:14:09.459745039Z level=info msg="Query Service initialization" 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:39.415+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:16:38 grafana | logger=live.push_http t=2024-04-10T23:14:09.463581794Z level=info msg="Live Push Gateway initialization" 23:16:38 policy-pap | acks = -1 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 grafana | logger=ngalert.migration t=2024-04-10T23:14:09.473597634Z level=info msg=Starting 23:16:38 policy-pap | auto.include.jmx.reporter = true 23:16:38 policy-pap | batch.size = 16384 23:16:38 grafana | 
logger=ngalert.migration t=2024-04-10T23:14:09.474093036Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false 23:16:38 policy-pap | bootstrap.servers = [kafka:9092] 23:16:38 grafana | logger=ngalert.migration orgID=1 t=2024-04-10T23:14:09.474729752Z level=info msg="Migrating alerts for organisation" 23:16:38 policy-pap | buffer.memory = 33554432 23:16:38 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:38 grafana | logger=ngalert.migration orgID=1 t=2024-04-10T23:14:09.475601883Z level=info msg="Alerts found to migrate" alerts=0 23:16:38 policy-pap | client.id = producer-2 23:16:38 grafana | logger=ngalert.migration t=2024-04-10T23:14:09.478134686Z level=info msg="Completed alerting migration" 23:16:38 policy-pap | compression.type = none 23:16:38 grafana | logger=ngalert.state.manager t=2024-04-10T23:14:09.507053777Z level=info msg="Running in alternative execution of Error/NoData mode" 23:16:38 policy-pap | connections.max.idle.ms = 540000 23:16:38 grafana | logger=infra.usagestats.collector t=2024-04-10T23:14:09.509343614Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 23:16:38 policy-pap | delivery.timeout.ms = 120000 23:16:38 grafana | logger=provisioning.datasources t=2024-04-10T23:14:09.511754224Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 23:16:38 policy-pap | enable.idempotence = true 23:16:38 grafana | logger=provisioning.alerting t=2024-04-10T23:14:09.52566958Z level=info msg="starting to provision alerting" 23:16:38 policy-pap | interceptor.classes = [] 23:16:38 grafana | logger=provisioning.alerting t=2024-04-10T23:14:09.525685631Z level=info msg="finished to provision alerting" 23:16:38 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:38 grafana | logger=ngalert.state.manager t=2024-04-10T23:14:09.525944247Z level=info msg="Warming state cache for startup" 23:16:38 policy-pap | linger.ms = 0 23:16:38 grafana | logger=ngalert.state.manager t=2024-04-10T23:14:09.526531653Z level=info msg="State cache has been initialized" states=0 duration=586.486µs 23:16:38 policy-pap | max.block.ms = 60000 23:16:38 grafana | logger=ngalert.multiorg.alertmanager t=2024-04-10T23:14:09.526570884Z level=info msg="Starting MultiOrg Alertmanager" 23:16:38 policy-pap | max.in.flight.requests.per.connection = 5 23:16:38 grafana | logger=ngalert.scheduler t=2024-04-10T23:14:09.526614885Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 23:16:38 policy-pap | max.request.size = 1048576 23:16:38 grafana | logger=ticker t=2024-04-10T23:14:09.526677556Z level=info msg=starting first_tick=2024-04-10T23:14:10Z 23:16:38 policy-pap | metadata.max.age.ms = 300000 23:16:38 grafana | logger=http.server t=2024-04-10T23:14:09.528172183Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 23:16:38 policy-pap | metadata.max.idle.ms = 300000 23:16:38 grafana | logger=grafanaStorageLogger t=2024-04-10T23:14:09.528576393Z level=info msg="Storage starting" 23:16:38 policy-pap | metric.reporters = [] 23:16:38 grafana | logger=provisioning.dashboard t=2024-04-10T23:14:09.565804771Z level=info msg="starting to provision dashboards" 23:16:38 policy-pap | metrics.num.samples = 2 23:16:38 grafana | logger=plugins.update.checker t=2024-04-10T23:14:09.613435128Z level=info msg="Update check succeeded" duration=85.610644ms 23:16:38 policy-pap | metrics.recording.level = INFO 
23:16:38 grafana | logger=grafana.update.checker t=2024-04-10T23:14:09.617864267Z level=info msg="Update check succeeded" duration=90.641318ms 23:16:38 policy-pap | metrics.sample.window.ms = 30000 23:16:38 grafana | logger=sqlstore.transactions t=2024-04-10T23:14:09.684444446Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 23:16:38 policy-pap | partitioner.adaptive.partitioning.enable = true 23:16:38 grafana | logger=sqlstore.transactions t=2024-04-10T23:14:09.696460245Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 23:16:38 policy-pap | partitioner.availability.timeout.ms = 0 23:16:38 grafana | logger=provisioning.dashboard t=2024-04-10T23:14:09.864969253Z level=info msg="finished to provision dashboards" 23:16:38 policy-pap | partitioner.class = null 23:16:38 grafana | logger=grafana-apiserver t=2024-04-10T23:14:10.038885936Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 23:16:38 policy-pap | partitioner.ignore.keys = false 23:16:38 grafana | logger=grafana-apiserver t=2024-04-10T23:14:10.039660246Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 23:16:38 grafana | logger=infra.usagestats t=2024-04-10T23:15:12.539637173Z level=info msg="Usage stats are ready to report" 23:16:38 policy-pap | receive.buffer.bytes = 32768 23:16:38 policy-pap | reconnect.backoff.max.ms = 1000 23:16:38 policy-pap | reconnect.backoff.ms = 50 23:16:38 policy-pap | request.timeout.ms = 30000 23:16:38 policy-pap | retries = 2147483647 23:16:38 policy-pap | retry.backoff.ms = 100 23:16:38 policy-pap | sasl.client.callback.handler.class = null 23:16:38 policy-pap | sasl.jaas.config = null 23:16:38 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:38 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:38 policy-pap | sasl.kerberos.service.name = null 23:16:38 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:38 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:38 policy-pap | sasl.login.callback.handler.class = null 23:16:38 policy-pap | sasl.login.class = null 23:16:38 policy-pap | sasl.login.connect.timeout.ms = null 23:16:38 policy-pap | sasl.login.read.timeout.ms = null 23:16:38 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:38 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:38 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:38 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:38 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:38 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:38 policy-pap | sasl.mechanism = GSSAPI 23:16:38 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:38 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:38 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:38 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:38 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 policy-pap | 
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:38 kafka | [2024-04-10 23:14:40,259] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:38 kafka | [2024-04-10 23:14:40,260] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:38 kafka | [2024-04-10 23:14:40,260] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 policy-pap | security.protocol = PLAINTEXT 23:16:38 kafka | [2024-04-10 23:14:40,260] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 policy-pap | security.providers = null 23:16:38 kafka | [2024-04-10 23:14:40,260] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:38 policy-pap | send.buffer.bytes = 131072 23:16:38 kafka | [2024-04-10 23:14:40,304] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 23:16:38 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:38 kafka | [2024-04-10 
23:14:40,304] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 23:16:38 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:38 kafka | [2024-04-10 23:14:40,304] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 23:16:38 policy-pap | ssl.cipher.suites = null 23:16:38 kafka | [2024-04-10 23:14:40,304] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 23:16:38 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:38 kafka | [2024-04-10 23:14:40,304] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 23:16:38 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:38 kafka | [2024-04-10 23:14:40,304] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 23:16:38 policy-pap | ssl.engine.factory.class = null 23:16:38 kafka | [2024-04-10 23:14:40,304] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 23:16:38 policy-pap | ssl.key.password = null 23:16:38 kafka | [2024-04-10 23:14:40,304] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 23:16:38 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:38 kafka | [2024-04-10 23:14:40,304] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 23:16:38 policy-pap | ssl.keystore.certificate.chain = null 23:16:38 kafka | [2024-04-10 23:14:40,304] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 23:16:38 policy-pap | ssl.keystore.key = null 23:16:38 kafka | [2024-04-10 23:14:40,304] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 23:16:38 policy-pap | ssl.keystore.location = null 23:16:38 policy-pap | ssl.keystore.password = null 23:16:38 policy-pap | ssl.keystore.type = JKS 23:16:38 kafka | [2024-04-10 23:14:40,304] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 23:16:38 policy-pap | ssl.protocol = TLSv1.3 23:16:38 kafka | [2024-04-10 23:14:40,304] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 23:16:38 policy-pap | ssl.provider = null 23:16:38 kafka | [2024-04-10 23:14:40,304] TRACE [Broker id=1] 
Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 23:16:38 policy-pap | ssl.secure.random.implementation = null 23:16:38 kafka | [2024-04-10 23:14:40,304] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 23:16:38 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:38 kafka | [2024-04-10 23:14:40,304] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 23:16:38 policy-pap | ssl.truststore.certificates = null 23:16:38 kafka | [2024-04-10 23:14:40,304] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 23:16:38 policy-pap | ssl.truststore.location = null 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 23:16:38 policy-pap | ssl.truststore.password = null 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 23:16:38 policy-pap | ssl.truststore.type = JKS 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 23:16:38 policy-pap | transaction.timeout.ms = 60000 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 23:16:38 policy-pap | transactional.id = null 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 23:16:38 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 23:16:38 policy-pap | 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:39.416+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:39.419+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:39.419+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:39.419+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1712790879419 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:39.419+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=a76d0949-b67a-4138-83db-75b75d7274ea, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:39.419+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:39.419+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:39.421+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:39.422+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:39.424+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 23:16:38 policy-pap | 
[2024-04-10T23:14:39.424+00:00|INFO|TimerManager|Thread-9] timer manager update started 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:39.428+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:39.428+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:39.429+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:39.429+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:39.430+00:00|INFO|ServiceManager|main] Policy PAP started 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:39.431+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 11.708 seconds (process running for 12.366) 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:39.937+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: mAFlxob1QoSnxAKM2SbgkA 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:39.937+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:39.937+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: mAFlxob1QoSnxAKM2SbgkA 23:16:38 kafka | 
[2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:39.940+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: mAFlxob1QoSnxAKM2SbgkA 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:39.988+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3, groupId=9f4a6b38-834c-48e5-bf2a-977246f9eaf0] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:39.989+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3, groupId=9f4a6b38-834c-48e5-bf2a-977246f9eaf0] Cluster ID: mAFlxob1QoSnxAKM2SbgkA 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:40.036+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:38 policy-pap | [2024-04-10T23:14:40.066+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:40.070+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:40.106+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3, groupId=9f4a6b38-834c-48e5-bf2a-977246f9eaf0] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:38 kafka | [2024-04-10 23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:40.176+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:38 kafka | [2024-04-10 
23:14:40,305] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:40.216+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3, groupId=9f4a6b38-834c-48e5-bf2a-977246f9eaf0] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:38 kafka | [2024-04-10 23:14:40,307] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 23:16:38 policy-pap | [2024-04-10T23:14:40.309+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:38 kafka | [2024-04-10 23:14:40,307] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,368] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | [2024-04-10T23:14:40.327+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3, groupId=9f4a6b38-834c-48e5-bf2a-977246f9eaf0] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:38 kafka | [2024-04-10 23:14:40,384] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | [2024-04-10T23:14:40.437+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:38 kafka | [2024-04-10 23:14:40,386] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 23:16:38 
policy-pap | [2024-04-10T23:14:40.447+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3, groupId=9f4a6b38-834c-48e5-bf2a-977246f9eaf0] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:38 kafka | [2024-04-10 23:14:40,387] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:14:40.544+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:38 kafka | [2024-04-10 23:14:40,389] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:40.556+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3, groupId=9f4a6b38-834c-48e5-bf2a-977246f9eaf0] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:38 kafka | [2024-04-10 23:14:40,411] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | [2024-04-10T23:14:40.670+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3, groupId=9f4a6b38-834c-48e5-bf2a-977246f9eaf0] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 23:16:38 kafka | [2024-04-10 23:14:40,412] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | [2024-04-10T23:14:40.679+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:38 kafka | [2024-04-10 23:14:40,413] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:14:40.779+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3, groupId=9f4a6b38-834c-48e5-bf2a-977246f9eaf0] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:38 kafka | [2024-04-10 23:14:40,413] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:14:40.788+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:38 kafka | [2024-04-10 23:14:40,413] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 
from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:40.885+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3, groupId=9f4a6b38-834c-48e5-bf2a-977246f9eaf0] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:38 kafka | [2024-04-10 23:14:40,448] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | [2024-04-10T23:14:40.896+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:38 kafka | [2024-04-10 23:14:40,449] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | [2024-04-10T23:14:40.999+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3, groupId=9f4a6b38-834c-48e5-bf2a-977246f9eaf0] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:16:38 kafka | [2024-04-10 23:14:40,449] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:14:41.007+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3, groupId=9f4a6b38-834c-48e5-bf2a-977246f9eaf0] (Re-)joining group 23:16:38 kafka | [2024-04-10 23:14:40,450] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:14:41.010+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:16:38 kafka | [2024-04-10 23:14:40,450] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
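
The interleaved policy-pap WARN lines above are transient: both consumers request metadata for policy-pdp-pap while the broker is still running the become-leader transition for the freshly auto-created partition, so the metadata response carries LEADER_NOT_AVAILABLE (and once UNKNOWN_TOPIC_OR_PARTITION) until a leader is elected, after which the group coordinator is discovered normally. A minimal sketch of the same check using the Kafka AdminClient, assuming only the kafka:9092 address shown in the log; this is illustrative and not part of the CSIT code:

```java
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class WaitForPdpPapLeader {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // broker address from the log
        try (AdminClient admin = AdminClient.create(props)) {
            while (true) {
                try {
                    TopicDescription desc = admin
                            .describeTopics(Collections.singleton("policy-pdp-pap"))
                            .all().get().get("policy-pdp-pap");
                    // A partition without an elected leader is what the consumers above
                    // see as LEADER_NOT_AVAILABLE while the become-leader transition runs.
                    boolean allLed = desc.partitions().stream()
                            .allMatch(p -> p.leader() != null && !p.leader().isEmpty());
                    if (allLed) {
                        System.out.println("policy-pdp-pap has an elected leader for every partition");
                        return;
                    }
                } catch (ExecutionException e) {
                    // Topic may not exist yet; the log shows the equivalent
                    // UNKNOWN_TOPIC_OR_PARTITION before auto-creation completes.
                }
                Thread.sleep(500); // retry, like the clients' periodic metadata refreshes above
            }
        }
    }
}
```
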
(state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:41.012+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 23:16:38 kafka | [2024-04-10 23:14:40,462] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | [2024-04-10T23:14:41.043+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3, groupId=9f4a6b38-834c-48e5-bf2a-977246f9eaf0] Request joining group due to: need to re-join with the given member-id: consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3-2fe3e0fc-c777-439d-89c2-ab7fb6462276 23:16:38 kafka | [2024-04-10 23:14:40,463] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | [2024-04-10T23:14:41.044+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3, groupId=9f4a6b38-834c-48e5-bf2a-977246f9eaf0] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 23:16:38 kafka | [2024-04-10 23:14:40,463] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:14:41.044+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3, groupId=9f4a6b38-834c-48e5-bf2a-977246f9eaf0] (Re-)joining group 23:16:38 kafka | [2024-04-10 23:14:40,464] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:14:41.046+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-8421d98a-0932-4b9e-b25e-c675b33858f7 23:16:38 kafka | [2024-04-10 23:14:40,464] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:41.046+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 23:16:38 kafka | [2024-04-10 23:14:40,475] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | [2024-04-10T23:14:41.046+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 23:16:38 kafka | [2024-04-10 23:14:40,476] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | [2024-04-10T23:14:41.591+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' 23:16:38 kafka | [2024-04-10 23:14:40,476] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:14:41.591+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' 23:16:38 kafka | [2024-04-10 23:14:40,476] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:14:41.594+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 3 ms 23:16:38 policy-pap | [2024-04-10T23:14:44.086+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3, groupId=9f4a6b38-834c-48e5-bf2a-977246f9eaf0] Successfully joined group with generation Generation{generationId=1, memberId='consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3-2fe3e0fc-c777-439d-89c2-ab7fb6462276', protocol='range'} 23:16:38 kafka | [2024-04-10 23:14:40,476] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
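
The (Re-)joining sequence above is the standard two-step JoinGroup handshake: the first attempt carries no member id, the coordinator answers with MemberIdRequiredException plus the id it assigned (consumer-9f4a6b38-...-2fe3e0fc-... and consumer-policy-pap-4-8421d98a-...), and each consumer immediately rejoins with that id before being placed in generation 1. A minimal consumer sketch that goes through the same handshake on its first poll; the group id here is illustrative, not the CSIT one:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class JoinGroupDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");   // from the log
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");            // illustrative group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("policy-pdp-pap"));
            // The first poll drives the two-step JoinGroup seen above:
            // "need to re-join with the given member-id", a rejoin, then
            // "Successfully joined group with generation Generation{generationId=1, ...}".
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            System.out.println("fetched " + records.count() + " records after joining the group");
        }
    }
}
```
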
(state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:44.095+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-8421d98a-0932-4b9e-b25e-c675b33858f7', protocol='range'} 23:16:38 kafka | [2024-04-10 23:14:40,484] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | [2024-04-10T23:14:44.100+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3, groupId=9f4a6b38-834c-48e5-bf2a-977246f9eaf0] Finished assignment for group at generation 1: {consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3-2fe3e0fc-c777-439d-89c2-ab7fb6462276=Assignment(partitions=[policy-pdp-pap-0])} 23:16:38 kafka | [2024-04-10 23:14:40,485] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | [2024-04-10T23:14:44.100+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-8421d98a-0932-4b9e-b25e-c675b33858f7=Assignment(partitions=[policy-pdp-pap-0])} 23:16:38 kafka | [2024-04-10 23:14:40,485] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:14:44.133+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3, groupId=9f4a6b38-834c-48e5-bf2a-977246f9eaf0] Successfully synced group in generation Generation{generationId=1, memberId='consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3-2fe3e0fc-c777-439d-89c2-ab7fb6462276', protocol='range'} 23:16:38 kafka | [2024-04-10 23:14:40,486] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,486] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,495] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | [2024-04-10T23:14:44.133+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-8421d98a-0932-4b9e-b25e-c675b33858f7', protocol='range'} 23:16:38 kafka | [2024-04-10 23:14:40,496] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | [2024-04-10T23:14:44.134+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:38 kafka | [2024-04-10 23:14:40,496] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:14:44.134+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3, groupId=9f4a6b38-834c-48e5-bf2a-977246f9eaf0] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:38 kafka | [2024-04-10 23:14:40,496] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:14:44.142+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 23:16:38 kafka | [2024-04-10 23:14:40,497] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:14:44.142+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3, groupId=9f4a6b38-834c-48e5-bf2a-977246f9eaf0] Adding newly assigned partitions: policy-pdp-pap-0 23:16:38 kafka | [2024-04-10 23:14:40,504] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | [2024-04-10T23:14:44.169+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3, groupId=9f4a6b38-834c-48e5-bf2a-977246f9eaf0] Found no committed offset for partition policy-pdp-pap-0 23:16:38 kafka | [2024-04-10 23:14:40,504] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | [2024-04-10T23:14:44.169+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 23:16:38 kafka | [2024-04-10 23:14:40,504] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:14:44.192+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 23:16:38 kafka | [2024-04-10 23:14:40,504] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:14:44.192+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3, groupId=9f4a6b38-834c-48e5-bf2a-977246f9eaf0] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 23:16:38 kafka | [2024-04-10 23:14:40,505] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
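
Because the consumer groups are brand new, there are no committed offsets for policy-pdp-pap-0, so each consumer falls back to its offset-reset policy and is positioned at the current end of the partition (offset 1 in this run). Resetting to the log end is consistent with auto.offset.reset=latest, although the setting itself is not printed here. A small sketch of inspecting committed and end offsets for that partition; the group id and reset policy are assumptions:

```java
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class OffsetResetDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");  // from the log
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "demo-group");           // illustrative
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");      // assumed reset policy
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        TopicPartition tp = new TopicPartition("policy-pdp-pap", 0);
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singleton(tp));
            Map<TopicPartition, OffsetAndMetadata> committed =
                    consumer.committed(Collections.singleton(tp));
            long end = consumer.endOffsets(Collections.singleton(tp)).get(tp);
            if (committed.get(tp) == null) {
                // Matches "Found no committed offset for partition policy-pdp-pap-0":
                // nothing committed for a new group, so position at the log end
                // (offset 1 in this run).
                consumer.seekToEnd(Collections.singleton(tp));
            }
            System.out.println("end offset for policy-pdp-pap-0 = " + end);
        }
    }
}
```
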
(state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:15:01.557+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers: 23:16:38 kafka | [2024-04-10 23:14:40,513] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | [] 23:16:38 kafka | [2024-04-10 23:14:40,514] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | [2024-04-10T23:15:01.558+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:38 kafka | [2024-04-10 23:14:40,514] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 23:16:38 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"6d8dce48-0b3f-4528-9899-b13742555876","timestampMs":1712790901512,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup"} 23:16:38 kafka | [2024-04-10 23:14:40,514] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.560+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:38 kafka | [2024-04-10 23:14:40,515] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:38 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"6d8dce48-0b3f-4528-9899-b13742555876","timestampMs":1712790901512,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup"} 23:16:38 kafka | [2024-04-10 23:14:40,525] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | [2024-04-10T23:15:01.568+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 23:16:38 kafka | [2024-04-10 23:14:40,526] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | [2024-04-10T23:15:01.652+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpUpdate starting 23:16:38 kafka | [2024-04-10 23:14:40,526] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.652+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpUpdate starting listener 23:16:38 kafka | [2024-04-10 23:14:40,526] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.652+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpUpdate starting timer 23:16:38 kafka | [2024-04-10 23:14:40,526] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:15:01.653+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=43b7d102-6385-499f-be91-353062d39071, expireMs=1712790931653] 23:16:38 kafka | [2024-04-10 23:14:40,537] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | [2024-04-10T23:15:01.654+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpUpdate starting enqueue 23:16:38 kafka | [2024-04-10 23:14:40,538] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | [2024-04-10T23:15:01.655+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=43b7d102-6385-499f-be91-353062d39071, expireMs=1712790931653] 23:16:38 kafka | [2024-04-10 23:14:40,538] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.656+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:38 kafka | [2024-04-10 23:14:40,538] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | {"source":"pap-f0ecc202-f082-45c7-b7f8-f2f10d3ef31a","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"43b7d102-6385-499f-be91-353062d39071","timestampMs":1712790901635,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:38 kafka | [2024-04-10 23:14:40,539] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:15:01.658+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpUpdate started 23:16:38 kafka | [2024-04-10 23:14:40,549] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | [2024-04-10T23:15:01.704+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:38 kafka | [2024-04-10 23:14:40,550] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | {"source":"pap-f0ecc202-f082-45c7-b7f8-f2f10d3ef31a","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"43b7d102-6385-499f-be91-353062d39071","timestampMs":1712790901635,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:38 kafka | [2024-04-10 23:14:40,550] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.705+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 23:16:38 kafka | [2024-04-10 23:14:40,550] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.711+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:38 kafka | [2024-04-10 23:14:40,550] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
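
PAP consumes the same policy-pdp-pap traffic through two sources (KAFKA-source-policy-pdp-pap and KAFKA-source-policy-heartbeat), so its own outgoing PDP_UPDATE and PDP_STATE_CHANGE messages come straight back to it; each source routes a message by its messageName field and discards types it has no listener for, which is what the "discarding event of type PDP_UPDATE" lines record. A minimal sketch of that routing idea, not the actual ONAP dispatcher implementation:

```java
import java.util.Map;
import java.util.function.Consumer;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class MessageNameRouter {
    private final ObjectMapper mapper = new ObjectMapper();
    // Only PDP_STATUS has a handler here; everything else is dropped,
    // mirroring the "discarding event of type ..." lines above.
    private final Map<String, Consumer<JsonNode>> handlers =
            Map.of("PDP_STATUS", msg -> System.out.println("handle " + msg.get("requestId")));

    public void onMessage(String json) throws Exception {
        JsonNode msg = mapper.readTree(json);
        String type = msg.get("messageName").asText();
        Consumer<JsonNode> handler = handlers.get(type);
        if (handler == null) {
            System.out.println("discarding event of type " + type);
            return;
        }
        handler.accept(msg);
    }

    public static void main(String[] args) throws Exception {
        new MessageNameRouter().onMessage(
                "{\"messageName\":\"PDP_UPDATE\",\"requestId\":\"demo\"}");
    }
}
```
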
(state.change.logger) 23:16:38 policy-pap | {"source":"pap-f0ecc202-f082-45c7-b7f8-f2f10d3ef31a","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"43b7d102-6385-499f-be91-353062d39071","timestampMs":1712790901635,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:38 kafka | [2024-04-10 23:14:40,560] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | [2024-04-10T23:15:01.711+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 23:16:38 kafka | [2024-04-10 23:14:40,563] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | [2024-04-10T23:15:01.731+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:38 kafka | [2024-04-10 23:14:40,564] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) 23:16:38 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"32d81460-be72-404b-b9df-8a5e51bd36ef","timestampMs":1712790901718,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup"} 23:16:38 kafka | [2024-04-10 23:14:40,564] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.733+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:38 kafka | [2024-04-10 23:14:40,564] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:38 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"32d81460-be72-404b-b9df-8a5e51bd36ef","timestampMs":1712790901718,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup"} 23:16:38 kafka | [2024-04-10 23:14:40,572] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | [2024-04-10T23:15:01.734+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 23:16:38 kafka | [2024-04-10 23:14:40,574] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | [2024-04-10T23:15:01.738+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:38 kafka | [2024-04-10 23:14:40,574] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) 23:16:38 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"43b7d102-6385-499f-be91-353062d39071","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"a7abe712-a48f-4ca3-9540-31399f5f2837","timestampMs":1712790901720,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:38 kafka | [2024-04-10 23:14:40,574] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.762+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpUpdate stopping 23:16:38 kafka | [2024-04-10 23:14:40,574] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:15:01.763+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpUpdate stopping enqueue 23:16:38 kafka | [2024-04-10 23:14:40,582] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 kafka | [2024-04-10 23:14:40,583] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | [2024-04-10T23:15:01.763+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpUpdate stopping timer 23:16:38 kafka | [2024-04-10 23:14:40,583] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.763+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=43b7d102-6385-499f-be91-353062d39071, expireMs=1712790931653] 23:16:38 kafka | [2024-04-10 23:14:40,584] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.763+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpUpdate stopping listener 23:16:38 kafka | [2024-04-10 23:14:40,584] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:15:01.763+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpUpdate stopped 23:16:38 kafka | [2024-04-10 23:14:40,595] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | [2024-04-10T23:15:01.767+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpUpdate successful 23:16:38 kafka | [2024-04-10 23:14:40,596] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | [2024-04-10T23:15:01.767+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b start publishing next request 23:16:38 kafka | [2024-04-10 23:14:40,596] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.767+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpStateChange starting 23:16:38 kafka | [2024-04-10 23:14:40,596] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.767+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpStateChange starting listener 23:16:38 kafka | [2024-04-10 23:14:40,596] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:15:01.767+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpStateChange starting timer 23:16:38 kafka | [2024-04-10 23:14:40,606] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | [2024-04-10T23:15:01.767+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=be191d72-cf37-4b38-8bd3-f09869418d7b, expireMs=1712790931767] 23:16:38 kafka | [2024-04-10 23:14:40,606] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | [2024-04-10T23:15:01.767+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpStateChange starting enqueue 23:16:38 kafka | [2024-04-10 23:14:40,606] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.768+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpStateChange started 23:16:38 kafka | [2024-04-10 23:14:40,606] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.768+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 29999ms Timer [name=be191d72-cf37-4b38-8bd3-f09869418d7b, expireMs=1712790931767] 23:16:38 kafka | [2024-04-10 23:14:40,606] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
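
The TimerManager lines express the PDP request timeout as an absolute epoch-millisecond deadline: the state-change timer registered above (expireMs=1712790931767) falls 30 s after the PDP_STATE_CHANGE was built, which is why the worker thread reports "waiting 29999ms" roughly a millisecond after registration. The same arithmetic, with the registration instant inferred from that waiting line rather than printed directly:

```java
import java.time.Duration;
import java.time.Instant;

public class TimerMath {
    public static void main(String[] args) {
        long expireMs = 1712790931767L;     // from "state-change timer registered" above
        long registeredAt = 1712790901768L; // inferred from the "waiting 29999ms" line
        System.out.println(Instant.ofEpochMilli(expireMs));              // 2024-04-10T23:15:31.767Z
        System.out.println(Duration.ofMillis(expireMs - registeredAt));  // PT29.999S, i.e. "waiting 29999ms"
    }
}
```
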
(state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:15:01.768+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:38 kafka | [2024-04-10 23:14:40,614] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | {"source":"pap-f0ecc202-f082-45c7-b7f8-f2f10d3ef31a","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"be191d72-cf37-4b38-8bd3-f09869418d7b","timestampMs":1712790901636,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:38 kafka | [2024-04-10 23:14:40,614] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | [2024-04-10T23:15:01.783+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:38 kafka | [2024-04-10 23:14:40,614] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 23:16:38 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"43b7d102-6385-499f-be91-353062d39071","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"a7abe712-a48f-4ca3-9540-31399f5f2837","timestampMs":1712790901720,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:38 kafka | [2024-04-10 23:14:40,614] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.785+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 43b7d102-6385-499f-be91-353062d39071 23:16:38 kafka | [2024-04-10 23:14:40,614] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:15:01.798+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:38 kafka | [2024-04-10 23:14:40,622] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | {"source":"pap-f0ecc202-f082-45c7-b7f8-f2f10d3ef31a","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"be191d72-cf37-4b38-8bd3-f09869418d7b","timestampMs":1712790901636,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:38 kafka | [2024-04-10 23:14:40,624] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | [2024-04-10T23:15:01.799+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 23:16:38 kafka | [2024-04-10 23:14:40,624] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.805+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:38 kafka | [2024-04-10 23:14:40,624] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"be191d72-cf37-4b38-8bd3-f09869418d7b","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"888e43f0-64dd-400e-978d-6aff9547a801","timestampMs":1712790901786,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:38 kafka | [2024-04-10 23:14:40,624] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:15:01.806+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id be191d72-cf37-4b38-8bd3-f09869418d7b 23:16:38 kafka | [2024-04-10 23:14:40,631] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | [2024-04-10T23:15:01.816+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:38 kafka | [2024-04-10 23:14:40,632] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | {"source":"pap-f0ecc202-f082-45c7-b7f8-f2f10d3ef31a","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"be191d72-cf37-4b38-8bd3-f09869418d7b","timestampMs":1712790901636,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:38 kafka | [2024-04-10 23:14:40,632] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.817+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 23:16:38 kafka | [2024-04-10 23:14:40,632] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.820+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:38 kafka | [2024-04-10 23:14:40,632] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:38 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"be191d72-cf37-4b38-8bd3-f09869418d7b","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"888e43f0-64dd-400e-978d-6aff9547a801","timestampMs":1712790901786,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:38 kafka | [2024-04-10 23:14:40,640] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | [2024-04-10T23:15:01.820+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpStateChange stopping 23:16:38 kafka | [2024-04-10 23:14:40,640] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | [2024-04-10T23:15:01.821+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpStateChange stopping enqueue 23:16:38 kafka | [2024-04-10 23:14:40,640] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.821+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpStateChange stopping timer 23:16:38 kafka | [2024-04-10 23:14:40,640] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.821+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=be191d72-cf37-4b38-8bd3-f09869418d7b, expireMs=1712790931767] 23:16:38 kafka | [2024-04-10 23:14:40,641] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
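
The payloads echoed above on policy-pdp-pap are plain JSON. PAP correlates each incoming PDP_STATUS with the request it sent by matching response.responseTo against the outgoing requestId, which is also how the "no listener for request id ..." lines on the heartbeat source are keyed. A throwaway Jackson sketch that pulls out just those correlation fields from one of the logged payloads; the real policy-pap code binds these messages to its PdpStatus model rather than reading a raw tree:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

public class PdpStatusPeek {
    public static void main(String[] args) throws Exception {
        // Abbreviated copy of the PDP_STATUS payload logged above (same structure, fewer fields).
        String json = "{\"pdpType\":\"apex\",\"state\":\"ACTIVE\",\"healthy\":\"HEALTHY\","
                + "\"response\":{\"responseTo\":\"be191d72-cf37-4b38-8bd3-f09869418d7b\","
                + "\"responseStatus\":\"SUCCESS\"},"
                + "\"messageName\":\"PDP_STATUS\",\"pdpGroup\":\"defaultGroup\",\"pdpSubgroup\":\"apex\"}";

        JsonNode status = new ObjectMapper().readTree(json);
        // response.responseTo matches the requestId of the PDP_STATE_CHANGE PAP sent,
        // which lets it cancel the corresponding timer and mark the request successful.
        System.out.println(status.get("messageName").asText() + " answering "
                + status.get("response").get("responseTo").asText()
                + " with " + status.get("response").get("responseStatus").asText());
    }
}
```
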
(state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:15:01.821+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpStateChange stopping listener 23:16:38 kafka | [2024-04-10 23:14:40,682] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | [2024-04-10T23:15:01.821+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpStateChange stopped 23:16:38 kafka | [2024-04-10 23:14:40,683] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | [2024-04-10T23:15:01.821+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpStateChange successful 23:16:38 kafka | [2024-04-10 23:14:40,683] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.821+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b start publishing next request 23:16:38 kafka | [2024-04-10 23:14:40,683] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.821+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpUpdate starting 23:16:38 kafka | [2024-04-10 23:14:40,683] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:15:01.821+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpUpdate starting listener 23:16:38 kafka | [2024-04-10 23:14:40,693] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | [2024-04-10T23:15:01.821+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpUpdate starting timer 23:16:38 kafka | [2024-04-10 23:14:40,694] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | [2024-04-10T23:15:01.821+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=472c3b39-181b-4e95-9577-e8759534697c, expireMs=1712790931821] 23:16:38 kafka | [2024-04-10 23:14:40,694] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.821+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpUpdate starting enqueue 23:16:38 kafka | [2024-04-10 23:14:40,694] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.821+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpUpdate started 23:16:38 kafka | [2024-04-10 23:14:40,695] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,701] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | [2024-04-10T23:15:01.822+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:38 kafka | [2024-04-10 23:14:40,701] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | {"source":"pap-f0ecc202-f082-45c7-b7f8-f2f10d3ef31a","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"472c3b39-181b-4e95-9577-e8759534697c","timestampMs":1712790901807,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:38 kafka | [2024-04-10 23:14:40,701] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.831+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:38 kafka | [2024-04-10 23:14:40,701] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | {"source":"pap-f0ecc202-f082-45c7-b7f8-f2f10d3ef31a","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"472c3b39-181b-4e95-9577-e8759534697c","timestampMs":1712790901807,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:38 kafka | [2024-04-10 23:14:40,702] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:15:01.831+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 23:16:38 kafka | [2024-04-10 23:14:40,709] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | [2024-04-10T23:15:01.831+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:38 kafka | [2024-04-10 23:14:40,709] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | {"source":"pap-f0ecc202-f082-45c7-b7f8-f2f10d3ef31a","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"472c3b39-181b-4e95-9577-e8759534697c","timestampMs":1712790901807,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:38 kafka | [2024-04-10 23:14:40,709] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.832+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 23:16:38 kafka | [2024-04-10 23:14:40,710] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.842+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:38 kafka | [2024-04-10 23:14:40,710] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:38 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"472c3b39-181b-4e95-9577-e8759534697c","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"7f9ad840-c145-4eb1-87ea-e490e44946de","timestampMs":1712790901834,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:38 kafka | [2024-04-10 23:14:40,716] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | [2024-04-10T23:15:01.842+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:38 kafka | [2024-04-10 23:14:40,717] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"472c3b39-181b-4e95-9577-e8759534697c","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"7f9ad840-c145-4eb1-87ea-e490e44946de","timestampMs":1712790901834,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:38 kafka | [2024-04-10 23:14:40,717] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.843+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 472c3b39-181b-4e95-9577-e8759534697c 23:16:38 kafka | [2024-04-10 23:14:40,717] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.843+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpUpdate stopping 23:16:38 kafka | [2024-04-10 23:14:40,717] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:15:01.843+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpUpdate stopping enqueue 23:16:38 kafka | [2024-04-10 23:14:40,722] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | [2024-04-10T23:15:01.843+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpUpdate stopping timer 23:16:38 kafka | [2024-04-10 23:14:40,723] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | [2024-04-10T23:15:01.843+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=472c3b39-181b-4e95-9577-e8759534697c, expireMs=1712790931821] 23:16:38 kafka | [2024-04-10 23:14:40,723] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.844+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpUpdate stopping listener 23:16:38 kafka | [2024-04-10 23:14:40,723] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:01.844+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpUpdate stopped 23:16:38 kafka | [2024-04-10 23:14:40,723] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:15:01.848+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b PdpUpdate successful 23:16:38 kafka | [2024-04-10 23:14:40,730] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | [2024-04-10T23:15:01.849+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b has no more requests 23:16:38 kafka | [2024-04-10 23:14:40,730] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | [2024-04-10T23:15:06.815+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:16:38 kafka | [2024-04-10 23:14:40,730] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:06.822+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:16:38 kafka | [2024-04-10 23:14:40,730] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:07.272+00:00|INFO|SessionData|http-nio-6969-exec-7] unknown group testGroup 23:16:38 kafka | [2024-04-10 23:14:40,731] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:15:07.786+00:00|INFO|SessionData|http-nio-6969-exec-7] create cached group testGroup 23:16:38 kafka | [2024-04-10 23:14:40,737] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | [2024-04-10T23:15:07.786+00:00|INFO|SessionData|http-nio-6969-exec-7] creating DB group testGroup 23:16:38 kafka | [2024-04-10 23:14:40,738] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | [2024-04-10T23:15:08.378+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup 23:16:38 kafka | [2024-04-10 23:14:40,738] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:08.649+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy onap.restart.tca 1.0.0 23:16:38 kafka | [2024-04-10 23:14:40,738] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:08.754+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 23:16:38 kafka | [2024-04-10 23:14:40,738] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:15:08.754+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group testGroup 23:16:38 kafka | [2024-04-10 23:14:40,744] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | [2024-04-10T23:15:08.755+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group testGroup 23:16:38 kafka | [2024-04-10 23:14:40,745] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | [2024-04-10T23:15:08.770+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-04-10T23:15:08Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-04-10T23:15:08Z, user=policyadmin)] 23:16:38 kafka | [2024-04-10 23:14:40,745] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:09.499+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup 23:16:38 kafka | [2024-04-10 23:14:40,745] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:09.500+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 23:16:38 kafka | [2024-04-10 23:14:40,745] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:15:09.500+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy onap.restart.tca 1.0.0 23:16:38 kafka | [2024-04-10 23:14:40,750] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | [2024-04-10T23:15:09.500+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup 23:16:38 kafka | [2024-04-10 23:14:40,751] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | [2024-04-10T23:15:09.501+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup 23:16:38 kafka | [2024-04-10 23:14:40,751] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:09.515+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-10T23:15:09Z, user=policyadmin)] 23:16:38 kafka | [2024-04-10 23:14:40,751] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:09.917+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group defaultGroup 23:16:38 kafka | [2024-04-10 23:14:40,751] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:15:09.917+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup 23:16:38 kafka | [2024-04-10 23:14:40,757] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | [2024-04-10T23:15:09.917+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 23:16:38 kafka | [2024-04-10 23:14:40,758] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 policy-pap | [2024-04-10T23:15:09.917+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 23:16:38 kafka | [2024-04-10 23:14:40,758] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:09.917+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup 23:16:38 kafka | [2024-04-10 23:14:40,758] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:09.917+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup 23:16:38 kafka | [2024-04-10 23:14:40,758] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:38 policy-pap | [2024-04-10T23:15:09.932+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-04-10T23:15:09Z, user=policyadmin)] 23:16:38 kafka | [2024-04-10 23:14:40,764] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 policy-pap | [2024-04-10T23:15:30.514+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup 23:16:38 kafka | [2024-04-10 23:14:40,764] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 kafka | [2024-04-10 23:14:40,765] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:30.516+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup 23:16:38 policy-pap | [2024-04-10T23:15:31.654+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=43b7d102-6385-499f-be91-353062d39071, expireMs=1712790931653] 23:16:38 kafka | [2024-04-10 23:14:40,765] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 policy-pap | [2024-04-10T23:15:31.767+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=be191d72-cf37-4b38-8bd3-f09869418d7b, expireMs=1712790931767] 23:16:38 kafka | [2024-04-10 23:14:40,765] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,772] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 kafka | [2024-04-10 23:14:40,773] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 kafka | [2024-04-10 23:14:40,773] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,773] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,773] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
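Aside for readers tracing the PDP_STATUS exchange in the policy-heartbeat / policy-pdp-pap entries above: the payload is a JSON PdpStatus body, and the request id that RequestIdDispatcher reports ("no listener for request id 472c3b39-...") matches the responseTo value inside that payload. The sketch below is not part of the CSIT run; it only uses Python's standard json module and the message text copied verbatim from the log to show which fields PAP is acting on.

    import json

    # PDP_STATUS payload copied verbatim from the policy-heartbeat entries above.
    pdp_status = json.loads(
        '{"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY",'
        '"description":"Pdp status response message for PdpUpdate","policies":[],'
        '"response":{"responseTo":"472c3b39-181b-4e95-9577-e8759534697c",'
        '"responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},'
        '"messageName":"PDP_STATUS","requestId":"7f9ad840-c145-4eb1-87ea-e490e44946de",'
        '"timestampMs":1712790901834,"name":"apex-89e6e63c-e6e7-4365-a78f-3939e2d58a8b",'
        '"pdpGroup":"defaultGroup","pdpSubgroup":"apex"}'
    )

    # The values visible in the PAP log lines: the id being correlated and the outcome.
    print(pdp_status["response"]["responseTo"])      # 472c3b39-181b-4e95-9577-e8759534697c
    print(pdp_status["response"]["responseStatus"])  # SUCCESS
    print(pdp_status["pdpGroup"], pdp_status["pdpSubgroup"])  # defaultGroup apex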
(state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,782] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 kafka | [2024-04-10 23:14:40,783] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) 23:16:38 kafka | [2024-04-10 23:14:40,783] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,783] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,783] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(qAknXW8rRl-lTJen2kDk1Q) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,791] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 kafka | [2024-04-10 23:14:40,792] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 kafka | [2024-04-10 23:14:40,792] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,793] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,793] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,803] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 kafka | [2024-04-10 23:14:40,803] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 kafka | [2024-04-10 23:14:40,804] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,804] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,804] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,812] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 kafka | [2024-04-10 23:14:40,813] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 kafka | [2024-04-10 23:14:40,814] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,814] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,814] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,820] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 kafka | [2024-04-10 23:14:40,820] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 kafka | [2024-04-10 23:14:40,820] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,821] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,821] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,827] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 kafka | [2024-04-10 23:14:40,828] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 kafka | [2024-04-10 23:14:40,828] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,828] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,828] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,836] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 kafka | [2024-04-10 23:14:40,837] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 kafka | [2024-04-10 23:14:40,837] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,837] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,837] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,844] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 kafka | [2024-04-10 23:14:40,844] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 kafka | [2024-04-10 23:14:40,845] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,845] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,845] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,852] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 kafka | [2024-04-10 23:14:40,853] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 kafka | [2024-04-10 23:14:40,853] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,853] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,853] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,859] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 kafka | [2024-04-10 23:14:40,860] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 kafka | [2024-04-10 23:14:40,860] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,860] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,861] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,867] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 kafka | [2024-04-10 23:14:40,867] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 kafka | [2024-04-10 23:14:40,868] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,868] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,868] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,875] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 kafka | [2024-04-10 23:14:40,875] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 kafka | [2024-04-10 23:14:40,876] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,876] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,876] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,884] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 kafka | [2024-04-10 23:14:40,885] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 kafka | [2024-04-10 23:14:40,885] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,885] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,886] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,894] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 kafka | [2024-04-10 23:14:40,895] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 kafka | [2024-04-10 23:14:40,895] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,895] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,896] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,904] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 kafka | [2024-04-10 23:14:40,905] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 kafka | [2024-04-10 23:14:40,905] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,905] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,905] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,915] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 kafka | [2024-04-10 23:14:40,916] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 kafka | [2024-04-10 23:14:40,916] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,916] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,917] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,923] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:38 kafka | [2024-04-10 23:14:40,924] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:38 kafka | [2024-04-10 23:14:40,924] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,924] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 23:16:38 kafka | [2024-04-10 23:14:40,924] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(d-olkgqFQhOA7vPVx66rWg) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,930] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,930] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,930] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,930] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,930] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,930] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,930] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,930] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,930] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,930] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 23:16:38 
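Aside on the __consumer_offsets bookkeeping that dominates the broker output here: each consumer group is served by the broker that leads exactly one of the 50 __consumer_offsets partitions, which is why the coordinator-election messages that follow name a partition number rather than a group. The mapping is, roughly, a non-negative hash of the group id modulo the offsets-topic partition count; the sketch below re-implements that idea using the Java String hashCode convention purely for illustration, and may not match Kafka's exact internal hashing. The group id used in the example is hypothetical, not one taken from this run.

    OFFSETS_TOPIC_PARTITIONS = 50  # default offsets.topic.num.partitions, matching the 0..49 partitions in the log

    def java_string_hashcode(s: str) -> int:
        """Java String.hashCode(): h = 31*h + char, on 32-bit signed ints."""
        h = 0
        for ch in s:
            h = (31 * h + ord(ch)) & 0xFFFFFFFF
        return h - (1 << 32) if h >= (1 << 31) else h

    def coordinator_partition(group_id: str) -> int:
        # Illustrative only: non-negative hash of the group id, modulo the partition count.
        return (java_string_hashcode(group_id) & 0x7FFFFFFF) % OFFSETS_TOPIC_PARTITIONS

    # Hypothetical group id; the real consumer group names from this CSIT run are not shown here.
    print(coordinator_partition("policy-pap"))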
kafka | [2024-04-10 23:14:40,930] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,930] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,930] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,931] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,931] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,931] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,931] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,931] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,931] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,931] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,931] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,931] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,931] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,931] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,931] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,931] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader 
transition for partition __consumer_offsets-37 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,931] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,931] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,931] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,931] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,931] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,931] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,931] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,931] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,931] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,931] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,931] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,931] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,931] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,932] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,932] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,932] TRACE [Broker id=1] Completed LeaderAndIsr 
request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,932] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,932] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,932] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,932] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,932] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,932] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,932] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,932] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,932] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,938] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,940] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,942] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,942] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,942] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,942] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,942] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | 
[2024-04-10 23:14:40,942] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,942] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,942] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,942] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,942] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,942] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,942] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,942] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,942] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,943] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,943] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,943] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,943] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,943] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,943] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,943] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,943] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,943] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,943] INFO 
[GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,943] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,943] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,943] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,943] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,943] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,943] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,943] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,943] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,943] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,944] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,944] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,944] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,944] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,944] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,944] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,944] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,944] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,944] INFO [GroupMetadataManager brokerId=1] 
Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,944] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,944] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,944] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,944] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,944] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,944] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,944] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,944] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,944] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,944] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,944] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,945] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,945] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,945] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,945] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,945] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,945] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,945] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and 
group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,945] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,945] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,945] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,945] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,945] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,945] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,945] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,945] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,945] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,945] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,945] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,945] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,946] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,946] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,946] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,946] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,946] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,946] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 
for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,946] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,946] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,946] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,946] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,946] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,946] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,946] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,946] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,946] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,946] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,946] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,947] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,947] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,947] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,947] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,947] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,947] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,947] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,947] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:40,947] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,950] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 8 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,951] INFO [Broker id=1] Finished LeaderAndIsr request in 696ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,952] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,952] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,952] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,952] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,952] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,953] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,953] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,953] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,953] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,953] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,954] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,954] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,954] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,954] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,954] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,954] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,955] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,955] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,955] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,955] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,955] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,956] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,956] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,956] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,956] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,957] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 13 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,957] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,957] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,957] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,957] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,958] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,958] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,958] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,958] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=d-olkgqFQhOA7vPVx66rWg, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=qAknXW8rRl-lTJen2kDk1Q, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,958] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,958] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,959] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 14 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,959] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,959] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,959] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,959] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,959] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,960] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,960] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,960] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,960] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,960] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 13 milliseconds for epoch 0, of which 13 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,961] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,961] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,961] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 14 milliseconds for epoch 0, of which 14 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:38 kafka | [2024-04-10 23:14:40,969] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,969] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,969] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,969] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,970] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,970] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,970] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,970] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,970] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,970] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,970] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,970] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,970] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 
23:14:40,970] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,970] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,970] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,970] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,971] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,971] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,971] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,971] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,971] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to 
UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,971] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,971] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,971] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,971] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,971] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,971] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,971] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,971] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,972] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,972] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,972] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,972] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,972] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,972] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,972] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,972] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,972] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,972] TRACE [Broker id=1] 
Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,972] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,972] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,972] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,973] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,973] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,973] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,973] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,973] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by 
controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,973] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,973] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,973] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,974] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:40,975] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:38 kafka | [2024-04-10 23:14:41,038] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 9f4a6b38-834c-48e5-bf2a-977246f9eaf0 in Empty state. Created a new member id consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3-2fe3e0fc-c777-439d-89c2-ab7fb6462276 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:41,042] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-8421d98a-0932-4b9e-b25e-c675b33858f7 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:41,066] INFO [GroupCoordinator 1]: Preparing to rebalance group 9f4a6b38-834c-48e5-bf2a-977246f9eaf0 in state PreparingRebalance with old generation 0 (__consumer_offsets-8) (reason: Adding new member consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3-2fe3e0fc-c777-439d-89c2-ab7fb6462276 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:41,066] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-8421d98a-0932-4b9e-b25e-c675b33858f7 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:41,973] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 8c9f1915-d141-4575-8b29-0255c152ac0a in Empty state. 
Created a new member id consumer-8c9f1915-d141-4575-8b29-0255c152ac0a-2-dee66f2c-2405-421f-9cf7-51cebf354ae9 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:41,983] INFO [GroupCoordinator 1]: Preparing to rebalance group 8c9f1915-d141-4575-8b29-0255c152ac0a in state PreparingRebalance with old generation 0 (__consumer_offsets-16) (reason: Adding new member consumer-8c9f1915-d141-4575-8b29-0255c152ac0a-2-dee66f2c-2405-421f-9cf7-51cebf354ae9 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:44,082] INFO [GroupCoordinator 1]: Stabilized group 9f4a6b38-834c-48e5-bf2a-977246f9eaf0 generation 1 (__consumer_offsets-8) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:44,087] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:44,111] INFO [GroupCoordinator 1]: Assignment received from leader consumer-9f4a6b38-834c-48e5-bf2a-977246f9eaf0-3-2fe3e0fc-c777-439d-89c2-ab7fb6462276 for group 9f4a6b38-834c-48e5-bf2a-977246f9eaf0 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:44,113] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-8421d98a-0932-4b9e-b25e-c675b33858f7 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:44,985] INFO [GroupCoordinator 1]: Stabilized group 8c9f1915-d141-4575-8b29-0255c152ac0a generation 1 (__consumer_offsets-16) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:38 kafka | [2024-04-10 23:14:45,004] INFO [GroupCoordinator 1]: Assignment received from leader consumer-8c9f1915-d141-4575-8b29-0255c152ac0a-2-dee66f2c-2405-421f-9cf7-51cebf354ae9 for group 8c9f1915-d141-4575-8b29-0255c152ac0a for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:38 ++ echo 'Tearing down containers...' 23:16:38 Tearing down containers... 23:16:38 ++ docker-compose down -v --remove-orphans 23:16:38 Stopping grafana ... 23:16:38 Stopping policy-apex-pdp ... 23:16:38 Stopping policy-pap ... 23:16:38 Stopping policy-api ... 23:16:38 Stopping kafka ... 23:16:38 Stopping compose_zookeeper_1 ... 23:16:38 Stopping mariadb ... 23:16:38 Stopping simulator ... 23:16:38 Stopping prometheus ... 23:16:39 Stopping grafana ... done 23:16:39 Stopping prometheus ... done 23:16:48 Stopping policy-apex-pdp ... done 23:16:59 Stopping policy-pap ... done 23:16:59 Stopping simulator ... done 23:17:00 Stopping mariadb ... done 23:17:00 Stopping kafka ... done 23:17:01 Stopping compose_zookeeper_1 ... done 23:17:09 Stopping policy-api ... done 23:17:09 Removing grafana ... 23:17:09 Removing policy-apex-pdp ... 23:17:09 Removing policy-pap ... 23:17:09 Removing policy-api ... 23:17:09 Removing policy-db-migrator ... 23:17:09 Removing kafka ... 23:17:09 Removing compose_zookeeper_1 ... 23:17:09 Removing mariadb ... 23:17:09 Removing simulator ... 23:17:09 Removing prometheus ... 23:17:09 Removing simulator ... done 23:17:09 Removing policy-api ... done 23:17:09 Removing policy-apex-pdp ... done 23:17:09 Removing kafka ... 
done 23:17:09 Removing policy-pap ... done 23:17:09 Removing policy-db-migrator ... done 23:17:09 Removing grafana ... done 23:17:09 Removing compose_zookeeper_1 ... done 23:17:09 Removing mariadb ... done 23:17:09 Removing prometheus ... done 23:17:09 Removing network compose_default 23:17:10 ++ cd /w/workspace/policy-pap-master-project-csit-pap 23:17:10 + load_set 23:17:10 + _setopts=hxB 23:17:10 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:17:10 ++ tr : ' ' 23:17:10 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:10 + set +o braceexpand 23:17:10 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:10 + set +o hashall 23:17:10 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:10 + set +o interactive-comments 23:17:10 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:10 + set +o xtrace 23:17:10 ++ echo hxB 23:17:10 ++ sed 's/./& /g' 23:17:10 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:17:10 + set +h 23:17:10 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:17:10 + set +x 23:17:10 + [[ -n /tmp/tmp.3MdzHR9SiW ]] 23:17:10 + rsync -av /tmp/tmp.3MdzHR9SiW/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 23:17:10 sending incremental file list 23:17:10 ./ 23:17:10 log.html 23:17:10 output.xml 23:17:10 report.html 23:17:10 testplan.txt 23:17:10 23:17:10 sent 919,353 bytes received 95 bytes 1,838,896.00 bytes/sec 23:17:10 total size is 918,811 speedup is 1.00 23:17:10 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models 23:17:10 + exit 0 23:17:10 $ ssh-agent -k 23:17:10 unset SSH_AUTH_SOCK; 23:17:10 unset SSH_AGENT_PID; 23:17:10 echo Agent pid 2123 killed; 23:17:10 [ssh-agent] Stopped. 23:17:10 Robot results publisher started... 23:17:10 INFO: Checking test criticality is deprecated and will be dropped in a future release! 23:17:10 -Parsing output xml: 23:17:10 Done! 23:17:10 WARNING! Could not find file: **/log.html 23:17:10 WARNING! Could not find file: **/report.html 23:17:10 -Copying log files to build dir: 23:17:10 Done! 23:17:10 -Assigning results to build: 23:17:10 Done! 23:17:10 -Checking thresholds: 23:17:10 Done! 23:17:10 Done publishing Robot results. 23:17:10 [PostBuildScript] - [INFO] Executing post build scripts. 
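The Kafka messages above show broker 1 electing itself group coordinator for every __consumer_offsets partition, loading the offset and group-metadata segments, and then taking the policy-pap consumer group (plus two UUID-named groups) through join, PreparingRebalance, Stabilized and assignment for generation 1. A minimal sketch of how those groups could have been inspected while the compose stack was still up; the container name kafka and the bootstrap address kafka:9092 come straight from the log, while the exact CLI tool names and paths depend on the Kafka image in use:

# list all consumer groups known to the broker
docker exec kafka kafka-consumer-groups --bootstrap-server kafka:9092 --list
# show members, partition assignment and lag for the policy-pap group
docker exec kafka kafka-consumer-groups --bootstrap-server kafka:9092 --describe --group policy-pap
# confirm leadership/ISR for the internal offsets topic (50 partitions, single replica here)
docker exec kafka kafka-topics --bootstrap-server kafka:9092 --describe --topic __consumer_offsets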
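The "+ load_set" trace that follows the teardown is the job restoring its saved shell options before archiving the Robot results. The function body itself is not in this log, so the following is only a condensed reading of what the traced commands amount to, using the same variable names; note that tracing stops as soon as "set +x" runs, which is why no further option changes appear in the trace:

_setopts=hxB                                     # value taken from the trace above
for i in $(echo "${SHELLOPTS}" | tr ':' ' '); do
  set +o "$i"                                    # walk the long-name options (braceexpand, hashall, ...)
done
for i in $(echo "$_setopts" | sed 's/./& /g'); do
  set +"$i"                                      # then the single-letter flags, e.g. +h and +x
done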
23:17:10 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10679016605326875250.sh 23:17:10 ---> sysstat.sh 23:17:11 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins17393209698797311571.sh 23:17:11 ---> package-listing.sh 23:17:11 ++ facter osfamily 23:17:11 ++ tr '[:upper:]' '[:lower:]' 23:17:11 + OS_FAMILY=debian 23:17:11 + workspace=/w/workspace/policy-pap-master-project-csit-pap 23:17:11 + START_PACKAGES=/tmp/packages_start.txt 23:17:11 + END_PACKAGES=/tmp/packages_end.txt 23:17:11 + DIFF_PACKAGES=/tmp/packages_diff.txt 23:17:11 + PACKAGES=/tmp/packages_start.txt 23:17:11 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 23:17:11 + PACKAGES=/tmp/packages_end.txt 23:17:11 + case "${OS_FAMILY}" in 23:17:11 + dpkg -l 23:17:11 + grep '^ii' 23:17:11 + '[' -f /tmp/packages_start.txt ']' 23:17:11 + '[' -f /tmp/packages_end.txt ']' 23:17:11 + diff /tmp/packages_start.txt /tmp/packages_end.txt 23:17:11 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 23:17:11 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/ 23:17:11 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/ 23:17:11 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins13420757807757666319.sh 23:17:11 ---> capture-instance-metadata.sh 23:17:11 Setup pyenv: 23:17:11 system 23:17:11 3.8.13 23:17:11 3.9.13 23:17:11 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:17:11 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-UZJF from file:/tmp/.os_lf_venv 23:17:13 lf-activate-venv(): INFO: Installing: lftools 23:17:22 lf-activate-venv(): INFO: Adding /tmp/venv-UZJF/bin to PATH 23:17:22 INFO: Running in OpenStack, capturing instance metadata 23:17:22 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins11307437880716340917.sh 23:17:22 provisioning config files... 23:17:22 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config15630640435370640133tmp 23:17:22 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] 23:17:22 Run condition [Regular expression match] preventing perform for step [Provide Configuration files] 23:17:22 [EnvInject] - Injecting environment variables from a build step. 23:17:22 [EnvInject] - Injecting as environment variables the properties content 23:17:22 SERVER_ID=logs 23:17:22 23:17:22 [EnvInject] - Variables injected successfully. 23:17:22 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins8233644099759655755.sh 23:17:22 ---> create-netrc.sh 23:17:22 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins15740486244928104403.sh 23:17:22 ---> python-tools-install.sh 23:17:22 Setup pyenv: 23:17:23 system 23:17:23 3.8.13 23:17:23 3.9.13 23:17:23 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:17:23 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-UZJF from file:/tmp/.os_lf_venv 23:17:24 lf-activate-venv(): INFO: Installing: lftools 23:17:32 lf-activate-venv(): INFO: Adding /tmp/venv-UZJF/bin to PATH 23:17:32 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4547891000859951476.sh 23:17:32 ---> sudo-logs.sh 23:17:32 Archiving 'sudo' log.. 
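The package-listing.sh trace above snapshots the packages installed on the build node at the end of the job and diffs them against the snapshot taken at the start. A condensed sketch of the same flow; the redirections and the trailing "|| true" are assumptions, since "set -x" does not trace redirects and diff exits non-zero when the files differ:

WORKSPACE=/w/workspace/policy-pap-master-project-csit-pap    # from the trace
dpkg -l | grep '^ii' > /tmp/packages_end.txt                  # packages installed now
diff /tmp/packages_start.txt /tmp/packages_end.txt > /tmp/packages_diff.txt || true
mkdir -p "$WORKSPACE/archives/"
cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt "$WORKSPACE/archives/"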
23:17:33 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins6171222220623560065.sh
23:17:33 ---> job-cost.sh
23:17:33 Setup pyenv:
23:17:33 system
23:17:33 3.8.13
23:17:33 3.9.13
23:17:33 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:17:33 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-UZJF from file:/tmp/.os_lf_venv
23:17:34 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
23:17:39 lf-activate-venv(): INFO: Adding /tmp/venv-UZJF/bin to PATH
23:17:39 INFO: No Stack...
23:17:39 INFO: Retrieving Pricing Info for: v3-standard-8
23:17:39 INFO: Archiving Costs
23:17:39 [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins2693488987431473851.sh
23:17:39 ---> logs-deploy.sh
23:17:39 Setup pyenv:
23:17:39 system
23:17:39 3.8.13
23:17:39 3.9.13
23:17:39 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:17:40 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-UZJF from file:/tmp/.os_lf_venv
23:17:41 lf-activate-venv(): INFO: Installing: lftools
23:17:49 lf-activate-venv(): INFO: Adding /tmp/venv-UZJF/bin to PATH
23:17:49 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1640
23:17:49 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
23:17:51 Archives upload complete.
23:17:51 INFO: archiving logs to Nexus
23:17:52 ---> uname -a:
23:17:52 Linux prd-ubuntu1804-docker-8c-8g-22180 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
23:17:52
23:17:52
23:17:52 ---> lscpu:
23:17:52 Architecture:        x86_64
23:17:52 CPU op-mode(s):      32-bit, 64-bit
23:17:52 Byte Order:          Little Endian
23:17:52 CPU(s):              8
23:17:52 On-line CPU(s) list: 0-7
23:17:52 Thread(s) per core:  1
23:17:52 Core(s) per socket:  1
23:17:52 Socket(s):           8
23:17:52 NUMA node(s):        1
23:17:52 Vendor ID:           AuthenticAMD
23:17:52 CPU family:          23
23:17:52 Model:               49
23:17:52 Model name:          AMD EPYC-Rome Processor
23:17:52 Stepping:            0
23:17:52 CPU MHz:             2799.998
23:17:52 BogoMIPS:            5599.99
23:17:52 Virtualization:      AMD-V
23:17:52 Hypervisor vendor:   KVM
23:17:52 Virtualization type: full
23:17:52 L1d cache:           32K
23:17:52 L1i cache:           32K
23:17:52 L2 cache:            512K
23:17:52 L3 cache:            16384K
23:17:52 NUMA node0 CPU(s):   0-7
23:17:52 Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
23:17:52
23:17:52
23:17:52 ---> nproc:
23:17:52 8
23:17:52
23:17:52
23:17:52 ---> df -h:
23:17:52 Filesystem      Size  Used Avail Use% Mounted on
23:17:52 udev             16G     0   16G   0% /dev
23:17:52 tmpfs           3.2G  708K  3.2G   1% /run
23:17:52 /dev/vda1       155G   14G  142G   9% /
23:17:52 tmpfs            16G     0   16G   0% /dev/shm
23:17:52 tmpfs           5.0M     0  5.0M   0% /run/lock
23:17:52 tmpfs            16G     0   16G   0% /sys/fs/cgroup
23:17:52 /dev/vda15      105M  4.4M  100M   5% /boot/efi
23:17:52 tmpfs           3.2G     0  3.2G   0% /run/user/1001
23:17:52
23:17:52
23:17:52 ---> free -m:
23:17:52         total    used    free   shared   buff/cache   available
23:17:52 Mem:    32167     822   25127        0         6217       30889
23:17:52 Swap:    1023       0    1023
23:17:52
23:17:52
23:17:52 ---> ip addr:
23:17:52 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
23:17:52     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
23:17:52     inet 127.0.0.1/8 scope host lo
23:17:52        valid_lft forever preferred_lft forever
23:17:52     inet6 ::1/128 scope host
23:17:52        valid_lft forever preferred_lft forever
23:17:52 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
23:17:52     link/ether fa:16:3e:3a:c5:a5 brd ff:ff:ff:ff:ff:ff
23:17:52     inet 10.30.107.69/23 brd 10.30.107.255 scope global dynamic ens3
23:17:52        valid_lft 85949sec preferred_lft 85949sec
23:17:52     inet6 fe80::f816:3eff:fe3a:c5a5/64 scope link
23:17:52        valid_lft forever preferred_lft forever
23:17:52 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
23:17:52     link/ether 02:42:0f:8b:e1:fa brd ff:ff:ff:ff:ff:ff
23:17:52     inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
23:17:52        valid_lft forever preferred_lft forever
23:17:52
23:17:52
23:17:52 ---> sar -b -r -n DEV:
23:17:52 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-22180)  04/10/24  _x86_64_  (8 CPU)
23:17:52
23:17:52 23:10:24 LINUX RESTART (8 CPU)
23:17:52
23:17:52 23:11:01     tps     rtps     wtps   bread/s    bwrtn/s
23:17:52 23:12:01  115.63    35.54    80.09   1682.51   29625.32
23:17:52 23:13:01  130.77    23.12   107.65   2775.60   35101.07
23:17:52 23:14:01  313.16     2.88   310.28    432.12  155983.16
23:17:52 23:15:01  252.82     9.27   243.56    387.94   32103.13
23:17:52 23:16:01   19.45     0.00    19.45      0.00   23613.16
23:17:52 23:17:01   61.79     0.10    61.69     19.33   26051.02
23:17:52 Average:  148.94    11.82   137.12    882.87   50415.59
23:17:52
23:17:52 23:11:01  kbmemfree   kbavail  kbmemused  %memused  kbbuffers  kbcached  kbcommit  %commit  kbactive  kbinact  kbdirty
23:17:52 23:12:01   30124068  31699260    2815152      8.55      69884   1816616   1446076     4.25    873964  1652264   145948
23:17:52 23:13:01   28795488  31682852    4143732     12.58     100360   3058924   1549024     4.56    971460  2797908  1056216
23:17:52 23:14:01   25395932  31316548    7543288     22.90     144048   5895260   5843920    17.19   1416388  5558052      840
23:17:52 23:15:01   23417512  29476276    9521708     28.91     157628   6008100   9001632    26.48   3392508  5521228      224
23:17:52 23:16:01   23373564  29433060    9565656     29.04     157844   6008384   8963552    26.37   3438852  5518864      268
23:17:52 23:17:01   25006212  31083772    7933008     24.08     158660   6035880   2453548     7.22   1860484  5515100       52
23:17:52 Average:   26018796  30781961    6920424     21.01     131404   4803861   4876292    14.35   1992276  4427236   200591
23:17:52
23:17:52 23:11:01  IFACE            rxpck/s  txpck/s    rxkB/s   txkB/s  rxcmp/s  txcmp/s  rxmcst/s  %ifutil
23:17:52 23:12:01  ens3               61.85    43.27    834.77     9.90     0.00     0.00      0.00     0.00
23:17:52 23:12:01  docker0             0.00     0.00      0.00     0.00     0.00     0.00      0.00     0.00
23:17:52 23:12:01  lo                  1.60     1.60      0.18     0.18     0.00     0.00      0.00     0.00
23:17:52 23:13:01  ens3              262.58   173.68   6522.60    16.88     0.00     0.00      0.00     0.00
23:17:52 23:13:01  docker0             0.00     0.00      0.00     0.00     0.00     0.00      0.00     0.00
23:17:52 23:13:01  lo                  7.40     7.40      0.70     0.70     0.00     0.00      0.00     0.00
23:17:52 23:13:01  br-dc70bb6e93b6     0.00     0.00      0.00     0.00     0.00     0.00      0.00     0.00
23:17:52 23:14:01  veth48ef517         0.00     0.00      0.00     0.00     0.00     0.00      0.00     0.00
23:17:52 23:14:01  veth6bc0ae4         0.00     0.00      0.00     0.00     0.00     0.00      0.00     0.00
23:17:52 23:14:01  vethdf079f3         0.00     0.00      0.00     0.00     0.00     0.00      0.00     0.00
23:17:52 23:14:01  vethcb703f0         0.03     0.10      0.00     0.01     0.00     0.00      0.00     0.00
23:17:52 23:15:01  vethf0df7d5         7.33     7.25      1.25     0.75     0.00     0.00      0.00     0.00
23:17:52 23:15:01  veth3b34ccd         0.55     0.82      0.06     0.30     0.00     0.00      0.00     0.00
23:17:52 23:15:01  vethcb703f0        76.12    92.12     41.92    23.23     0.00     0.00      0.00     0.00
23:17:52 23:15:01  vethcee2622         5.28     6.73      0.83     0.94     0.00     0.00      0.00     0.00
23:17:52 23:16:01  vethf0df7d5        45.88    42.84     11.28    36.81     0.00     0.00      0.00     0.00
23:17:52 23:16:01  veth3b34ccd         0.25     0.18      0.02     0.01     0.00     0.00      0.00     0.00
23:17:52 23:16:01  vethcb703f0        30.86    37.26     35.67     8.54     0.00     0.00      0.00     0.00
23:17:52 23:16:01  vethcee2622         0.17     0.35      0.01     0.02     0.00     0.00      0.00     0.00
23:17:52 23:17:01  ens3             1713.56   902.30  33958.85   133.33     0.00     0.00      0.00     0.00
23:17:52 23:17:01  veth8acf67a        54.14    48.43     20.47    40.51     0.00     0.00      0.00     0.00
23:17:52 23:17:01  docker0             0.00     0.00      0.00     0.00     0.00     0.00      0.00     0.00
23:17:52 23:17:01  lo                 35.44    35.44      6.25     6.25     0.00     0.00      0.00     0.00
23:17:52 Average:  ens3              235.76   117.33   5532.53    13.39     0.00     0.00      0.00     0.00
23:17:52 Average:  veth8acf67a         9.02     8.07      3.41     6.75     0.00     0.00      0.00     0.00
23:17:52 Average:  docker0             0.00     0.00      0.00     0.00     0.00     0.00      0.00     0.00
23:17:52 Average:  lo                  5.25     5.25      0.99     0.99     0.00     0.00      0.00     0.00
23:17:52
23:17:52
23:17:52 ---> sar -P ALL:
23:17:52 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-22180)  04/10/24  _x86_64_  (8 CPU)
23:17:52
23:17:52 23:10:24 LINUX RESTART (8 CPU)
23:17:52
23:17:52 23:11:01  CPU    %user  %nice  %system  %iowait  %steal   %idle
23:17:52 23:12:01  all    10.07   0.00     0.80     2.23    0.04   86.86
23:17:52 23:12:01    0     2.28   0.00     0.32     0.15    0.03   97.22
23:17:52 23:12:01    1     2.01   0.00     0.52    14.32    0.03   83.12
23:17:52 23:12:01    2    27.97   0.00     1.79     1.34    0.07   68.84
23:17:52 23:12:01    3    15.61   0.00     1.04     0.36    0.03   82.95
23:17:52 23:12:01    4    13.38   0.00     0.92     0.48    0.10   85.12
23:17:52 23:12:01    5     7.01   0.00     0.67     0.35    0.02   91.95
23:17:52 23:12:01    6     5.59   0.00     0.68     0.35    0.02   93.36
23:17:52 23:12:01    7     6.69   0.00     0.43     0.58    0.02   92.28
23:17:52 23:13:01  all    11.13   0.00     1.94     1.73    0.04   85.16
23:17:52 23:13:01    0     8.65   0.00     2.03     1.46    0.05   87.81
23:17:52 23:13:01    1     8.72   0.00     1.68     7.14    0.08   82.37
23:17:52 23:13:01    2     7.66   0.00     1.37     0.40    0.02   90.55
23:17:52 23:13:01    3     3.25   0.00     1.37     0.92    0.03   94.42
23:17:52 23:13:01    4     8.90   0.00     3.23     0.18    0.03   87.65
23:17:52 23:13:01    5    26.24   0.00     2.21     1.05    0.07   70.43
23:17:52 23:13:01    6    14.76   0.00     2.06     2.29    0.03   80.86
23:17:52 23:13:01    7    10.81   0.00     1.57     0.38    0.03   87.21
23:17:52 23:14:01  all    13.43   0.00     6.13     6.23    0.07   74.13
23:17:52 23:14:01    0    14.20   0.00     7.25     1.77    0.08   76.70
23:17:52 23:14:01    1    11.47   0.00     6.57     0.22    0.05   81.69
23:17:52 23:14:01    2    13.20   0.00     6.60    20.70    0.10   59.39
23:17:52 23:14:01    3    12.57   0.00     5.06     0.80    0.07   81.50
23:17:52 23:14:01    4    13.18   0.00     6.56     5.74    0.07   74.46
23:17:52 23:14:01    5    13.29   0.00     4.98     0.66    0.07   80.99
23:17:52 23:14:01    6    14.96   0.00     5.95    11.21    0.08   67.79
23:17:52 23:14:01    7    14.59   0.00     6.13     8.80    0.07   70.41
23:17:52 23:15:01  all    28.06   0.00     3.51     1.83    0.09   66.51
23:17:52 23:15:01    0    27.88   0.00     3.91     4.97    0.10   63.14
23:17:52 23:15:01    1    30.12   0.00     3.94     0.67    0.10   65.16
23:17:52 23:15:01    2    25.96   0.00     3.52     0.59    0.08   69.84
23:17:52 23:15:01    3    20.78   0.00     2.34     1.11    0.07   75.70
23:17:52 23:15:01    4    33.96   0.00     3.97     2.08    0.08   59.90
23:17:52 23:15:01    5    28.97   0.00     3.51     1.80    0.08   65.64
23:17:52 23:15:01    6    31.18   0.00     3.74     1.81    0.10   63.16
23:17:52 23:15:01    7    25.73   0.00     3.21     1.59    0.07   69.40
23:17:52 23:16:01  all     4.70   0.00     0.42     0.83    0.06   93.99
23:17:52 23:16:01    0     3.79   0.00     0.38     6.28    0.08   89.46
23:17:52 23:16:01    1     4.36   0.00     0.38     0.00    0.03   95.23
23:17:52 23:16:01    2     6.15   0.00     0.57     0.07    0.05   93.16
23:17:52 23:16:01    3     4.65   0.00     0.30     0.02    0.03   95.00
23:17:52 23:16:01    4     5.47   0.00     0.45     0.18    0.05   93.84
23:17:52 23:16:01    5     3.43   0.00     0.42     0.08    0.05   96.02
23:17:52 23:16:01    6     5.14   0.00     0.58     0.02    0.08   94.17
23:17:52 23:16:01    7     4.60   0.00     0.28     0.00    0.08   95.03
23:17:52 23:17:01  all     1.55   0.00     0.57     1.06    0.05   96.76
23:17:52 23:17:01    0     0.99   0.00     0.64     7.04    0.07   91.27
23:17:52 23:17:01    1     0.99   0.00     0.52     0.07    0.05   98.38
23:17:52 23:17:01    2     1.32   0.00     0.75     0.28    0.05   97.59
23:17:52 23:17:01    3     2.66   0.00     0.53     0.02    0.03   96.77
23:17:52 23:17:01    4     1.39   0.00     0.57     0.03    0.03   97.98
23:17:52 23:17:01    5     1.15   0.00     0.57     0.45    0.05   97.78
23:17:52 23:17:01    6     2.66   0.00     0.59     0.32    0.08   96.35
23:17:52 23:17:01    7     1.25   0.00     0.37     0.28    0.03   98.06
23:17:52 Average:  all    11.47   0.00     2.22     2.31    0.06   83.94
23:17:52 Average:    0     9.61   0.00     2.41     3.61    0.07   84.30
23:17:52 Average:    1     9.59   0.00     2.26     3.74    0.06   84.34
23:17:52 Average:    2    13.70   0.00     2.42     3.83    0.06   79.98
23:17:52 Average:    3     9.91   0.00     1.76     0.54    0.04   87.76
23:17:52 Average:    4    12.70   0.00     2.60     1.44    0.06   83.20
23:17:52 Average:    5    13.33   0.00     2.05     0.73    0.06   83.83
23:17:52 Average:    6    12.36   0.00     2.26     2.65    0.07   82.66
23:17:52 Average:    7    10.59   0.00     1.99     1.93    0.05   85.44
23:17:52
23:17:52
23:17:52
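Note: the I/O, memory, network, and per-CPU tables above are read back from the sysstat data collected while the job ran (the sysstat.sh post-build step). On a comparable host the same views can be produced with the sar reader from the sysstat package; the live-sampling interval below is an illustrative assumption, not taken from the job configuration.

    # replay the recorded samples, as the post-build step does
    sar -b -r -n DEV      # block I/O, memory usage, and per-interface network rates
    sar -P ALL            # per-CPU utilisation breakdown

    # or sample live: one reading every 60 seconds, six times, per CPU
    sar -P ALL 60 6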