23:10:57 Started by timer
23:10:57 Running as SYSTEM
23:10:57 [EnvInject] - Loading node environment variables.
23:10:57 Building remotely on prd-ubuntu1804-docker-8c-8g-12229 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
23:10:57 [ssh-agent] Looking for ssh-agent implementation...
23:10:57 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
23:10:57 $ ssh-agent
23:10:57 SSH_AUTH_SOCK=/tmp/ssh-WSFed9p5VSMr/agent.2088
23:10:57 SSH_AGENT_PID=2090
23:10:57 [ssh-agent] Started.
23:10:57 Running ssh-add (command line suppressed)
23:10:57 Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_14127637394373633275.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_14127637394373633275.key)
23:10:57 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
23:10:57 The recommended git tool is: NONE
23:10:59 using credential onap-jenkins-ssh
23:10:59 Wiping out workspace first.
23:10:59 Cloning the remote Git repository
23:10:59 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
23:10:59 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
23:10:59 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
23:10:59 > git --version # timeout=10
23:10:59 > git --version # 'git version 2.17.1'
23:10:59 using GIT_SSH to set credentials Gerrit user
23:10:59 Verifying host key using manually-configured host key entries
23:10:59 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
23:10:59 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
23:10:59 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
23:11:00 Avoid second fetch
23:11:00 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
23:11:00 Checking out Revision 5582cd406c8414919c4d5d7f5b116f4f1e5a971d (refs/remotes/origin/master)
23:11:00 > git config core.sparsecheckout # timeout=10
23:11:00 > git checkout -f 5582cd406c8414919c4d5d7f5b116f4f1e5a971d # timeout=30
23:11:00 Commit message: "Merge "Add ACM regression test suite""
23:11:00 > git rev-list --no-walk 5582cd406c8414919c4d5d7f5b116f4f1e5a971d # timeout=10
23:11:00 provisioning config files...
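For reference, the checkout above can be reproduced outside Jenkins with roughly the following commands (a minimal sketch; the ssh-agent credential handling and the per-command timeouts are Jenkins-specific and omitted, and anonymous read access to the git:// mirror is assumed):

    git init /w/workspace/policy-pap-master-project-csit-pap
    cd /w/workspace/policy-pap-master-project-csit-pap
    git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git
    git config --add remote.origin.fetch '+refs/heads/*:refs/remotes/origin/*'
    git fetch --tags --progress origin
    # detach onto the exact revision that was built in this job
    git checkout -f 5582cd406c8414919c4d5d7f5b116f4f1e5a971d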
23:11:00 copy managed file [npmrc] to file:/home/jenkins/.npmrc 23:11:00 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf 23:11:00 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins11842791028681749633.sh 23:11:00 ---> python-tools-install.sh 23:11:00 Setup pyenv: 23:11:00 * system (set by /opt/pyenv/version) 23:11:00 * 3.8.13 (set by /opt/pyenv/version) 23:11:00 * 3.9.13 (set by /opt/pyenv/version) 23:11:00 * 3.10.6 (set by /opt/pyenv/version) 23:11:04 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-T1G5 23:11:04 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv 23:11:07 lf-activate-venv(): INFO: Installing: lftools 23:11:40 lf-activate-venv(): INFO: Adding /tmp/venv-T1G5/bin to PATH 23:11:40 Generating Requirements File 23:12:06 Python 3.10.6 23:12:07 pip 24.0 from /tmp/venv-T1G5/lib/python3.10/site-packages/pip (python 3.10) 23:12:07 appdirs==1.4.4 23:12:07 argcomplete==3.2.3 23:12:07 aspy.yaml==1.3.0 23:12:07 attrs==23.2.0 23:12:07 autopage==0.5.2 23:12:07 beautifulsoup4==4.12.3 23:12:07 boto3==1.34.59 23:12:07 botocore==1.34.59 23:12:07 bs4==0.0.2 23:12:07 cachetools==5.3.3 23:12:07 certifi==2024.2.2 23:12:07 cffi==1.16.0 23:12:07 cfgv==3.4.0 23:12:07 chardet==5.2.0 23:12:07 charset-normalizer==3.3.2 23:12:07 click==8.1.7 23:12:07 cliff==4.6.0 23:12:07 cmd2==2.4.3 23:12:07 cryptography==3.3.2 23:12:07 debtcollector==3.0.0 23:12:07 decorator==5.1.1 23:12:07 defusedxml==0.7.1 23:12:07 Deprecated==1.2.14 23:12:07 distlib==0.3.8 23:12:07 dnspython==2.6.1 23:12:07 docker==4.2.2 23:12:07 dogpile.cache==1.3.2 23:12:07 email_validator==2.1.1 23:12:07 filelock==3.13.1 23:12:07 future==1.0.0 23:12:07 gitdb==4.0.11 23:12:07 GitPython==3.1.42 23:12:07 google-auth==2.28.2 23:12:07 httplib2==0.22.0 23:12:07 identify==2.5.35 23:12:07 idna==3.6 23:12:07 importlib-resources==1.5.0 23:12:07 iso8601==2.1.0 23:12:07 Jinja2==3.1.3 23:12:07 jmespath==1.0.1 23:12:07 jsonpatch==1.33 23:12:07 jsonpointer==2.4 23:12:07 jsonschema==4.21.1 23:12:07 jsonschema-specifications==2023.12.1 23:12:07 keystoneauth1==5.6.0 23:12:07 kubernetes==29.0.0 23:12:07 lftools==0.37.9 23:12:07 lxml==5.1.0 23:12:07 MarkupSafe==2.1.5 23:12:07 msgpack==1.0.8 23:12:07 multi_key_dict==2.0.3 23:12:07 netaddr==1.2.1 23:12:07 netifaces==0.11.0 23:12:07 niet==1.4.2 23:12:07 nodeenv==1.8.0 23:12:07 oauth2client==4.1.3 23:12:07 oauthlib==3.2.2 23:12:07 openstacksdk==3.0.0 23:12:07 os-client-config==2.1.0 23:12:07 os-service-types==1.7.0 23:12:07 osc-lib==3.0.1 23:12:07 oslo.config==9.4.0 23:12:07 oslo.context==5.5.0 23:12:07 oslo.i18n==6.3.0 23:12:07 oslo.log==5.5.0 23:12:07 oslo.serialization==5.4.0 23:12:07 oslo.utils==7.1.0 23:12:07 packaging==23.2 23:12:07 pbr==6.0.0 23:12:07 platformdirs==4.2.0 23:12:07 prettytable==3.10.0 23:12:07 pyasn1==0.5.1 23:12:07 pyasn1-modules==0.3.0 23:12:07 pycparser==2.21 23:12:07 pygerrit2==2.0.15 23:12:07 PyGithub==2.2.0 23:12:07 pyinotify==0.9.6 23:12:07 PyJWT==2.8.0 23:12:07 PyNaCl==1.5.0 23:12:07 pyparsing==2.4.7 23:12:07 pyperclip==1.8.2 23:12:07 pyrsistent==0.20.0 23:12:07 python-cinderclient==9.5.0 23:12:07 python-dateutil==2.9.0.post0 23:12:07 python-heatclient==3.5.0 23:12:07 python-jenkins==1.8.2 23:12:07 python-keystoneclient==5.4.0 23:12:07 python-magnumclient==4.4.0 23:12:07 python-novaclient==18.5.0 23:12:07 python-openstackclient==6.5.0 23:12:07 python-swiftclient==4.5.0 23:12:07 PyYAML==6.0.1 23:12:07 referencing==0.33.0 23:12:07 requests==2.31.0 23:12:07 requests-oauthlib==1.3.1 23:12:07 requestsexceptions==1.4.0 23:12:07 
rfc3986==2.0.0 23:12:07 rpds-py==0.18.0 23:12:07 rsa==4.9 23:12:07 ruamel.yaml==0.18.6 23:12:07 ruamel.yaml.clib==0.2.8 23:12:07 s3transfer==0.10.0 23:12:07 simplejson==3.19.2 23:12:07 six==1.16.0 23:12:07 smmap==5.0.1 23:12:07 soupsieve==2.5 23:12:07 stevedore==5.2.0 23:12:07 tabulate==0.9.0 23:12:07 toml==0.10.2 23:12:07 tomlkit==0.12.4 23:12:07 tqdm==4.66.2 23:12:07 typing_extensions==4.10.0 23:12:07 tzdata==2024.1 23:12:07 urllib3==1.26.18 23:12:07 virtualenv==20.25.1 23:12:07 wcwidth==0.2.13 23:12:07 websocket-client==1.7.0 23:12:07 wrapt==1.16.0 23:12:07 xdg==6.0.0 23:12:07 xmltodict==0.13.0 23:12:07 yq==3.2.3 23:12:07 [EnvInject] - Injecting environment variables from a build step. 23:12:07 [EnvInject] - Injecting as environment variables the properties content 23:12:07 SET_JDK_VERSION=openjdk17 23:12:07 GIT_URL="git://cloud.onap.org/mirror" 23:12:07 23:12:07 [EnvInject] - Variables injected successfully. 23:12:07 [policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins7774432514495733646.sh 23:12:07 ---> update-java-alternatives.sh 23:12:07 ---> Updating Java version 23:12:07 ---> Ubuntu/Debian system detected 23:12:08 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode 23:12:08 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode 23:12:08 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode 23:12:08 openjdk version "17.0.4" 2022-07-19 23:12:08 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04) 23:12:08 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing) 23:12:08 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 23:12:08 [EnvInject] - Injecting environment variables from a build step. 23:12:08 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env' 23:12:08 [EnvInject] - Variables injected successfully. 
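For reference, the effect of update-java-alternatives.sh above can be approximated on a Debian/Ubuntu host with the following sketch (assuming the OpenJDK 17 packages are already installed; the job's script also registers a java_sdk_openjdk alternative, which is skipped here):

    sudo update-alternatives --set java /usr/lib/jvm/java-17-openjdk-amd64/bin/java
    sudo update-alternatives --set javac /usr/lib/jvm/java-17-openjdk-amd64/bin/javac
    export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
    java -version   # should now report OpenJDK 17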
23:12:08 [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins11963041503437401725.sh 23:12:08 + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap 23:12:08 + set +u 23:12:08 + save_set 23:12:08 + RUN_CSIT_SAVE_SET=ehxB 23:12:08 + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace 23:12:08 + '[' 1 -eq 0 ']' 23:12:08 + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:12:08 + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:08 + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:08 + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts 23:12:08 + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts 23:12:08 + export ROBOT_VARIABLES= 23:12:08 + ROBOT_VARIABLES= 23:12:08 + export PROJECT=pap 23:12:08 + PROJECT=pap 23:12:08 + cd /w/workspace/policy-pap-master-project-csit-pap 23:12:08 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 23:12:08 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 23:12:08 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh 23:12:08 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']' 23:12:08 + relax_set 23:12:08 + set +e 23:12:08 + set +o pipefail 23:12:08 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh 23:12:08 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:12:08 +++ mktemp -d 23:12:08 ++ ROBOT_VENV=/tmp/tmp.XTvpul9jaB 23:12:08 ++ echo ROBOT_VENV=/tmp/tmp.XTvpul9jaB 23:12:08 +++ python3 --version 23:12:08 ++ echo 'Python version is: Python 3.6.9' 23:12:08 Python version is: Python 3.6.9 23:12:08 ++ python3 -m venv --clear /tmp/tmp.XTvpul9jaB 23:12:09 ++ source /tmp/tmp.XTvpul9jaB/bin/activate 23:12:09 +++ deactivate nondestructive 23:12:09 +++ '[' -n '' ']' 23:12:09 +++ '[' -n '' ']' 23:12:09 +++ '[' -n /bin/bash -o -n '' ']' 23:12:09 +++ hash -r 23:12:09 +++ '[' -n '' ']' 23:12:09 +++ unset VIRTUAL_ENV 23:12:09 +++ '[' '!' 
nondestructive = nondestructive ']' 23:12:09 +++ VIRTUAL_ENV=/tmp/tmp.XTvpul9jaB 23:12:09 +++ export VIRTUAL_ENV 23:12:09 +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:09 +++ PATH=/tmp/tmp.XTvpul9jaB/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:09 +++ export PATH 23:12:09 +++ '[' -n '' ']' 23:12:09 +++ '[' -z '' ']' 23:12:09 +++ _OLD_VIRTUAL_PS1= 23:12:09 +++ '[' 'x(tmp.XTvpul9jaB) ' '!=' x ']' 23:12:09 +++ PS1='(tmp.XTvpul9jaB) ' 23:12:09 +++ export PS1 23:12:09 +++ '[' -n /bin/bash -o -n '' ']' 23:12:09 +++ hash -r 23:12:09 ++ set -exu 23:12:09 ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1' 23:12:13 ++ echo 'Installing Python Requirements' 23:12:13 Installing Python Requirements 23:12:13 ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt 23:12:31 ++ python3 -m pip -qq freeze 23:12:32 bcrypt==4.0.1 23:12:32 beautifulsoup4==4.12.3 23:12:32 bitarray==2.9.2 23:12:32 certifi==2024.2.2 23:12:32 cffi==1.15.1 23:12:32 charset-normalizer==2.0.12 23:12:32 cryptography==40.0.2 23:12:32 decorator==5.1.1 23:12:32 elasticsearch==7.17.9 23:12:32 elasticsearch-dsl==7.4.1 23:12:32 enum34==1.1.10 23:12:32 idna==3.6 23:12:32 importlib-resources==5.4.0 23:12:32 ipaddr==2.2.0 23:12:32 isodate==0.6.1 23:12:32 jmespath==0.10.0 23:12:32 jsonpatch==1.32 23:12:32 jsonpath-rw==1.4.0 23:12:32 jsonpointer==2.3 23:12:32 lxml==5.1.0 23:12:32 netaddr==0.8.0 23:12:32 netifaces==0.11.0 23:12:32 odltools==0.1.28 23:12:32 paramiko==3.4.0 23:12:32 pkg_resources==0.0.0 23:12:32 ply==3.11 23:12:32 pyang==2.6.0 23:12:32 pyangbind==0.8.1 23:12:32 pycparser==2.21 23:12:32 pyhocon==0.3.60 23:12:32 PyNaCl==1.5.0 23:12:32 pyparsing==3.1.2 23:12:32 python-dateutil==2.9.0.post0 23:12:32 regex==2023.8.8 23:12:32 requests==2.27.1 23:12:32 robotframework==6.1.1 23:12:32 robotframework-httplibrary==0.4.2 23:12:32 robotframework-pythonlibcore==3.0.0 23:12:32 robotframework-requests==0.9.4 23:12:32 robotframework-selenium2library==3.0.0 23:12:32 robotframework-seleniumlibrary==5.1.3 23:12:32 robotframework-sshlibrary==3.8.0 23:12:32 scapy==2.5.0 23:12:32 scp==0.14.5 23:12:32 selenium==3.141.0 23:12:32 six==1.16.0 23:12:32 soupsieve==2.3.2.post1 23:12:32 urllib3==1.26.18 23:12:32 waitress==2.0.0 23:12:32 WebOb==1.8.7 23:12:32 WebTest==3.0.0 23:12:32 zipp==3.6.0 23:12:32 ++ mkdir -p /tmp/tmp.XTvpul9jaB/src/onap 23:12:32 ++ rm -rf /tmp/tmp.XTvpul9jaB/src/onap/testsuite 23:12:32 ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre 23:12:37 ++ echo 'Installing python confluent-kafka library' 23:12:37 Installing python confluent-kafka library 23:12:37 ++ python3 -m pip install -qq confluent-kafka 23:12:38 ++ echo 'Uninstall docker-py and reinstall docker.' 23:12:38 Uninstall docker-py and reinstall docker. 
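Condensed from the trace above, prepare-robot-env.sh essentially builds a throw-away virtualenv for Robot Framework; a minimal sketch (paths relative to the policy/docker checkout, version pins as logged):

    ROBOT_VENV=$(mktemp -d)
    python3 -m venv --clear "$ROBOT_VENV"
    source "$ROBOT_VENV/bin/activate"
    python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
    python3 -m pip install -qq -r csit/resources/scripts/pylibs.txt
    python3 -m pip install -qq --pre --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*'
    python3 -m pip install -qq confluent-kafka
    # swap the legacy docker-py bindings for the current docker SDK
    python3 -m pip uninstall -y -qq docker
    python3 -m pip install -U -qq docker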
23:12:38 ++ python3 -m pip uninstall -y -qq docker 23:12:39 ++ python3 -m pip install -U -qq docker 23:12:40 ++ python3 -m pip -qq freeze 23:12:40 bcrypt==4.0.1 23:12:40 beautifulsoup4==4.12.3 23:12:40 bitarray==2.9.2 23:12:40 certifi==2024.2.2 23:12:40 cffi==1.15.1 23:12:40 charset-normalizer==2.0.12 23:12:40 confluent-kafka==2.3.0 23:12:40 cryptography==40.0.2 23:12:40 decorator==5.1.1 23:12:40 deepdiff==5.7.0 23:12:40 dnspython==2.2.1 23:12:40 docker==5.0.3 23:12:40 elasticsearch==7.17.9 23:12:40 elasticsearch-dsl==7.4.1 23:12:40 enum34==1.1.10 23:12:40 future==1.0.0 23:12:40 idna==3.6 23:12:40 importlib-resources==5.4.0 23:12:40 ipaddr==2.2.0 23:12:40 isodate==0.6.1 23:12:40 Jinja2==3.0.3 23:12:40 jmespath==0.10.0 23:12:40 jsonpatch==1.32 23:12:40 jsonpath-rw==1.4.0 23:12:40 jsonpointer==2.3 23:12:40 kafka-python==2.0.2 23:12:40 lxml==5.1.0 23:12:40 MarkupSafe==2.0.1 23:12:40 more-itertools==5.0.0 23:12:40 netaddr==0.8.0 23:12:40 netifaces==0.11.0 23:12:40 odltools==0.1.28 23:12:40 ordered-set==4.0.2 23:12:40 paramiko==3.4.0 23:12:40 pbr==6.0.0 23:12:40 pkg_resources==0.0.0 23:12:40 ply==3.11 23:12:40 protobuf==3.19.6 23:12:40 pyang==2.6.0 23:12:40 pyangbind==0.8.1 23:12:40 pycparser==2.21 23:12:40 pyhocon==0.3.60 23:12:40 PyNaCl==1.5.0 23:12:40 pyparsing==3.1.2 23:12:40 python-dateutil==2.9.0.post0 23:12:40 PyYAML==6.0.1 23:12:40 regex==2023.8.8 23:12:40 requests==2.27.1 23:12:40 robotframework==6.1.1 23:12:40 robotframework-httplibrary==0.4.2 23:12:40 robotframework-onap==0.6.0.dev105 23:12:40 robotframework-pythonlibcore==3.0.0 23:12:40 robotframework-requests==0.9.4 23:12:40 robotframework-selenium2library==3.0.0 23:12:40 robotframework-seleniumlibrary==5.1.3 23:12:40 robotframework-sshlibrary==3.8.0 23:12:40 robotlibcore-temp==1.0.2 23:12:40 scapy==2.5.0 23:12:40 scp==0.14.5 23:12:40 selenium==3.141.0 23:12:40 six==1.16.0 23:12:40 soupsieve==2.3.2.post1 23:12:40 urllib3==1.26.18 23:12:40 waitress==2.0.0 23:12:40 WebOb==1.8.7 23:12:40 websocket-client==1.3.1 23:12:40 WebTest==3.0.0 23:12:40 zipp==3.6.0 23:12:40 ++ uname 23:12:40 ++ grep -q Linux 23:12:40 ++ sudo apt-get -y -qq install libxml2-utils 23:12:40 + load_set 23:12:40 + _setopts=ehuxB 23:12:40 ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace 23:12:40 ++ tr : ' ' 23:12:40 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:40 + set +o braceexpand 23:12:40 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:40 + set +o hashall 23:12:40 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:40 + set +o interactive-comments 23:12:40 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:40 + set +o nounset 23:12:40 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:40 + set +o xtrace 23:12:40 ++ echo ehuxB 23:12:40 ++ sed 's/./& /g' 23:12:40 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:40 + set +e 23:12:40 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:40 + set +h 23:12:40 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:40 + set +u 23:12:40 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:40 + set +x 23:12:40 + source_safely /tmp/tmp.XTvpul9jaB/bin/activate 23:12:40 + '[' -z /tmp/tmp.XTvpul9jaB/bin/activate ']' 23:12:40 + relax_set 23:12:40 + set +e 23:12:40 + set +o pipefail 23:12:40 + . 
/tmp/tmp.XTvpul9jaB/bin/activate 23:12:40 ++ deactivate nondestructive 23:12:40 ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']' 23:12:40 ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:40 ++ export PATH 23:12:40 ++ unset _OLD_VIRTUAL_PATH 23:12:40 ++ '[' -n '' ']' 23:12:40 ++ '[' -n /bin/bash -o -n '' ']' 23:12:40 ++ hash -r 23:12:40 ++ '[' -n '' ']' 23:12:40 ++ unset VIRTUAL_ENV 23:12:40 ++ '[' '!' nondestructive = nondestructive ']' 23:12:40 ++ VIRTUAL_ENV=/tmp/tmp.XTvpul9jaB 23:12:40 ++ export VIRTUAL_ENV 23:12:40 ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:40 ++ PATH=/tmp/tmp.XTvpul9jaB/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:40 ++ export PATH 23:12:40 ++ '[' -n '' ']' 23:12:40 ++ '[' -z '' ']' 23:12:40 ++ _OLD_VIRTUAL_PS1='(tmp.XTvpul9jaB) ' 23:12:40 ++ '[' 'x(tmp.XTvpul9jaB) ' '!=' x ']' 23:12:40 ++ PS1='(tmp.XTvpul9jaB) (tmp.XTvpul9jaB) ' 23:12:40 ++ export PS1 23:12:40 ++ '[' -n /bin/bash -o -n '' ']' 23:12:40 ++ hash -r 23:12:40 + load_set 23:12:40 + _setopts=hxB 23:12:40 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:12:40 ++ tr : ' ' 23:12:40 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:40 + set +o braceexpand 23:12:40 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:40 + set +o hashall 23:12:40 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:40 + set +o interactive-comments 23:12:40 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:40 + set +o xtrace 23:12:40 ++ echo hxB 23:12:40 ++ sed 's/./& /g' 23:12:40 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:40 + set +h 23:12:40 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:40 + set +x 23:12:40 + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests 23:12:40 + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests 23:12:40 + export TEST_OPTIONS= 23:12:41 + TEST_OPTIONS= 23:12:41 ++ mktemp -d 23:12:41 + WORKDIR=/tmp/tmp.esLYy6xppR 23:12:41 + cd /tmp/tmp.esLYy6xppR 23:12:41 + docker login -u docker -p docker nexus3.onap.org:10001 23:12:41 WARNING! Using --password via the CLI is insecure. Use --password-stdin. 23:12:41 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json. 23:12:41 Configure a credential helper to remove this warning. 
See 23:12:41 https://docs.docker.com/engine/reference/commandline/login/#credentials-store 23:12:41 23:12:41 Login Succeeded 23:12:41 + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:12:41 + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' 23:12:41 + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh' 23:12:41 Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:12:41 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:12:41 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' 23:12:41 + relax_set 23:12:41 + set +e 23:12:41 + set +o pipefail 23:12:41 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:12:41 ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh 23:12:41 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:12:41 ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview 23:12:41 +++ GERRIT_BRANCH=master 23:12:41 +++ echo GERRIT_BRANCH=master 23:12:41 GERRIT_BRANCH=master 23:12:41 +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models 23:12:41 +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models 23:12:41 +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models 23:12:41 Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'... 23:12:42 +++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies 23:12:42 +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies 23:12:42 +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 23:12:42 +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 23:12:42 +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json 23:12:42 +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json 23:12:42 ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana 23:12:42 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:12:42 +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose 23:12:42 +++ grafana=false 23:12:42 +++ gui=false 23:12:42 +++ [[ 2 -gt 0 ]] 23:12:42 +++ key=apex-pdp 23:12:42 +++ case $key in 23:12:42 +++ echo apex-pdp 23:12:42 apex-pdp 23:12:42 +++ component=apex-pdp 23:12:42 +++ shift 23:12:42 +++ [[ 1 -gt 0 ]] 23:12:42 +++ key=--grafana 23:12:42 +++ case $key in 23:12:42 +++ grafana=true 23:12:42 +++ shift 23:12:42 +++ [[ 0 -gt 0 ]] 23:12:42 +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose 23:12:42 +++ echo 'Configuring docker compose...' 23:12:42 Configuring docker compose... 
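The compose startup that follows boils down to roughly this (a sketch; export-ports.sh and get-versions.sh live in the compose folder of the policy/docker repo and set the host ports and image tags seen below):

    cd /w/workspace/policy-pap-master-project-csit-pap/compose
    source export-ports.sh   # host ports, e.g. 30003 for PAP, 30259 for Prometheus
    source get-versions.sh   # policy image tags (3.1.2-SNAPSHOT in this run)
    docker-compose up -d apex-pdp grafana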
23:12:42 +++ source export-ports.sh 23:12:42 +++ source get-versions.sh 23:12:44 +++ '[' -z pap ']' 23:12:44 +++ '[' -n apex-pdp ']' 23:12:44 +++ '[' apex-pdp == logs ']' 23:12:44 +++ '[' true = true ']' 23:12:44 +++ echo 'Starting apex-pdp application with Grafana' 23:12:44 Starting apex-pdp application with Grafana 23:12:44 +++ docker-compose up -d apex-pdp grafana 23:12:45 Creating network "compose_default" with the default driver 23:12:45 Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)... 23:12:45 latest: Pulling from prom/prometheus 23:12:48 Digest: sha256:bc1794e85c9e00293351b967efa267ce6af1c824ac875a9d0c7ac84700a8b53e 23:12:48 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest 23:12:48 Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)... 23:12:48 latest: Pulling from grafana/grafana 23:12:53 Digest: sha256:f9811e4e687ffecf1a43adb9b64096c50bc0d7a782f8608530f478b6542de7d5 23:12:53 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest 23:12:53 Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)... 23:12:53 10.10.2: Pulling from mariadb 23:12:58 Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e 23:12:58 Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2 23:12:58 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)... 23:12:59 3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator 23:13:03 Digest: sha256:5772a5c551b30d73f901debb8dc38f305559b920e248a9ccb1dba3b880278a13 23:13:03 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT 23:13:03 Pulling zookeeper (confluentinc/cp-zookeeper:latest)... 23:13:03 latest: Pulling from confluentinc/cp-zookeeper 23:13:13 Digest: sha256:9babd1c0beaf93189982bdbb9fe4bf194a2730298b640c057817746c19838866 23:13:13 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest 23:13:13 Pulling kafka (confluentinc/cp-kafka:latest)... 23:13:13 latest: Pulling from confluentinc/cp-kafka 23:13:16 Digest: sha256:24cdd3a7fa89d2bed150560ebea81ff1943badfa61e51d66bb541a6b0d7fb047 23:13:16 Status: Downloaded newer image for confluentinc/cp-kafka:latest 23:13:16 Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)... 23:13:16 3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator 23:13:23 Digest: sha256:ed573692302e5a28aa3b51a60adbd7641290e273719edd44bc9ff784d1569efa 23:13:23 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT 23:13:23 Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)... 23:13:23 3.1.2-SNAPSHOT: Pulling from onap/policy-api 23:13:25 Digest: sha256:fdc9aa26830be0af882248f5f576f0e9466b8e17ff432e8618d01432efa85803 23:13:25 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT 23:13:25 Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)... 23:13:25 3.1.2-SNAPSHOT: Pulling from onap/policy-pap 23:13:26 Digest: sha256:5e7bdea16830f0dd3e16df519f0efbee38922192c2a79297bcac6699fa44e067 23:13:26 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT 23:13:26 Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)... 
23:13:26 3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp 23:13:43 Digest: sha256:6150a977631ab72b68f6d8aef4c9bd1e7c9ba8079ef3864510ec09056daa579d 23:13:43 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT 23:13:43 Creating mariadb ... 23:13:43 Creating prometheus ... 23:13:43 Creating simulator ... 23:13:43 Creating compose_zookeeper_1 ... 23:14:02 Creating prometheus ... done 23:14:02 Creating grafana ... 23:14:03 Creating mariadb ... done 23:14:03 Creating policy-db-migrator ... 23:14:04 Creating compose_zookeeper_1 ... done 23:14:04 Creating kafka ... 23:14:05 Creating policy-db-migrator ... done 23:14:05 Creating policy-api ... 23:14:05 Creating simulator ... done 23:14:06 Creating grafana ... done 23:14:08 Creating policy-api ... done 23:14:09 Creating kafka ... done 23:14:09 Creating policy-pap ... 23:14:11 Creating policy-pap ... done 23:14:11 Creating policy-apex-pdp ... 23:14:12 Creating policy-apex-pdp ... done 23:14:13 +++ echo 'Prometheus server: http://localhost:30259' 23:14:13 Prometheus server: http://localhost:30259 23:14:13 +++ echo 'Grafana server: http://localhost:30269' 23:14:13 Grafana server: http://localhost:30269 23:14:13 +++ cd /w/workspace/policy-pap-master-project-csit-pap 23:14:13 ++ sleep 10 23:14:23 ++ unset http_proxy https_proxy 23:14:23 ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003 23:14:23 Waiting for REST to come up on localhost port 30003... 23:14:23 NAMES STATUS 23:14:23 policy-apex-pdp Up 10 seconds 23:14:23 policy-pap Up 11 seconds 23:14:23 policy-api Up 14 seconds 23:14:23 kafka Up 13 seconds 23:14:23 policy-db-migrator Up 18 seconds 23:14:23 grafana Up 16 seconds 23:14:23 compose_zookeeper_1 Up 18 seconds 23:14:23 simulator Up 17 seconds 23:14:23 prometheus Up 21 seconds 23:14:23 mariadb Up 20 seconds 23:14:28 NAMES STATUS 23:14:28 policy-apex-pdp Up 15 seconds 23:14:28 policy-pap Up 16 seconds 23:14:28 policy-api Up 20 seconds 23:14:28 kafka Up 19 seconds 23:14:28 policy-db-migrator Up 23 seconds 23:14:28 grafana Up 21 seconds 23:14:28 compose_zookeeper_1 Up 23 seconds 23:14:28 simulator Up 22 seconds 23:14:28 prometheus Up 26 seconds 23:14:28 mariadb Up 25 seconds 23:14:33 NAMES STATUS 23:14:33 policy-apex-pdp Up 20 seconds 23:14:33 policy-pap Up 21 seconds 23:14:33 policy-api Up 25 seconds 23:14:33 kafka Up 24 seconds 23:14:33 policy-db-migrator Up 28 seconds 23:14:33 grafana Up 26 seconds 23:14:33 compose_zookeeper_1 Up 28 seconds 23:14:33 simulator Up 27 seconds 23:14:33 prometheus Up 31 seconds 23:14:33 mariadb Up 30 seconds 23:14:38 NAMES STATUS 23:14:38 policy-apex-pdp Up 25 seconds 23:14:38 policy-pap Up 27 seconds 23:14:38 policy-api Up 30 seconds 23:14:38 kafka Up 29 seconds 23:14:38 policy-db-migrator Up 33 seconds 23:14:38 grafana Up 31 seconds 23:14:38 compose_zookeeper_1 Up 34 seconds 23:14:38 simulator Up 33 seconds 23:14:38 prometheus Up 36 seconds 23:14:38 mariadb Up 35 seconds 23:14:43 NAMES STATUS 23:14:43 policy-apex-pdp Up 30 seconds 23:14:43 policy-pap Up 32 seconds 23:14:43 policy-api Up 35 seconds 23:14:43 kafka Up 34 seconds 23:14:43 policy-db-migrator Up 38 seconds 23:14:43 grafana Up 36 seconds 23:14:43 compose_zookeeper_1 Up 39 seconds 23:14:43 simulator Up 38 seconds 23:14:43 prometheus Up 41 seconds 23:14:43 mariadb Up 40 seconds 23:14:48 NAMES STATUS 23:14:48 policy-apex-pdp Up 35 seconds 23:14:48 policy-pap Up 37 seconds 23:14:48 policy-api Up 40 seconds 23:14:48 kafka Up 39 seconds 23:14:48 grafana Up 41 
seconds 23:14:48 compose_zookeeper_1 Up 44 seconds 23:14:48 simulator Up 43 seconds 23:14:48 prometheus Up 46 seconds 23:14:48 mariadb Up 45 seconds 23:14:53 NAMES STATUS 23:14:53 policy-apex-pdp Up 40 seconds 23:14:53 policy-pap Up 42 seconds 23:14:53 policy-api Up 45 seconds 23:14:53 kafka Up 44 seconds 23:14:53 grafana Up 46 seconds 23:14:53 compose_zookeeper_1 Up 49 seconds 23:14:53 simulator Up 48 seconds 23:14:53 prometheus Up 51 seconds 23:14:53 mariadb Up 50 seconds 23:14:58 NAMES STATUS 23:14:58 policy-apex-pdp Up 45 seconds 23:14:58 policy-pap Up 47 seconds 23:14:58 policy-api Up 50 seconds 23:14:58 kafka Up 49 seconds 23:14:58 grafana Up 51 seconds 23:14:58 compose_zookeeper_1 Up 54 seconds 23:14:58 simulator Up 53 seconds 23:14:58 prometheus Up 56 seconds 23:14:58 mariadb Up 55 seconds 23:15:03 NAMES STATUS 23:15:03 policy-apex-pdp Up 50 seconds 23:15:03 policy-pap Up 52 seconds 23:15:03 policy-api Up 55 seconds 23:15:03 kafka Up 54 seconds 23:15:03 grafana Up 56 seconds 23:15:03 compose_zookeeper_1 Up 59 seconds 23:15:03 simulator Up 58 seconds 23:15:03 prometheus Up About a minute 23:15:03 mariadb Up About a minute 23:15:08 NAMES STATUS 23:15:08 policy-apex-pdp Up 55 seconds 23:15:08 policy-pap Up 57 seconds 23:15:08 policy-api Up About a minute 23:15:08 kafka Up 59 seconds 23:15:08 grafana Up About a minute 23:15:08 compose_zookeeper_1 Up About a minute 23:15:08 simulator Up About a minute 23:15:08 prometheus Up About a minute 23:15:08 mariadb Up About a minute 23:15:13 NAMES STATUS 23:15:13 policy-apex-pdp Up About a minute 23:15:13 policy-pap Up About a minute 23:15:13 policy-api Up About a minute 23:15:13 kafka Up About a minute 23:15:13 grafana Up About a minute 23:15:13 compose_zookeeper_1 Up About a minute 23:15:13 simulator Up About a minute 23:15:13 prometheus Up About a minute 23:15:13 mariadb Up About a minute 23:15:13 ++ export 'SUITES=pap-test.robot 23:15:13 pap-slas.robot' 23:15:13 ++ SUITES='pap-test.robot 23:15:13 pap-slas.robot' 23:15:13 ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 23:15:13 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' 23:15:13 + load_set 23:15:13 + _setopts=hxB 23:15:13 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:15:13 ++ tr : ' ' 23:15:13 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:15:13 + set +o braceexpand 23:15:13 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:15:13 + set +o hashall 23:15:13 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:15:13 + set +o interactive-comments 23:15:13 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:15:13 + set +o xtrace 23:15:13 ++ echo hxB 23:15:13 ++ sed 's/./& /g' 23:15:13 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:15:13 + set +h 23:15:13 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:15:13 + set +x 23:15:13 + docker_stats 23:15:13 + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt 23:15:13 ++ uname -s 23:15:13 + '[' Linux == Darwin ']' 23:15:13 + sh -c 'top -bn1 | head -3' 23:15:14 top - 23:15:14 up 5 min, 0 users, load average: 3.32, 1.66, 0.68 23:15:14 Tasks: 206 total, 1 running, 131 sleeping, 0 stopped, 0 zombie 23:15:14 %Cpu(s): 12.6 us, 2.6 sy, 0.0 ni, 78.8 id, 5.9 wa, 0.0 hi, 0.1 si, 0.1 st 23:15:14 + echo 23:15:14 23:15:14 + sh -c 'free -h' 23:15:14 total 
used free shared buff/cache available 23:15:14 Mem: 31G 2.8G 22G 1.3M 6.2G 28G 23:15:14 Swap: 1.0G 0B 1.0G 23:15:14 + echo 23:15:14 + docker ps --format 'table {{ .Names }}\t{{ .Status }}' 23:15:14 23:15:14 NAMES STATUS 23:15:14 policy-apex-pdp Up About a minute 23:15:14 policy-pap Up About a minute 23:15:14 policy-api Up About a minute 23:15:14 kafka Up About a minute 23:15:14 grafana Up About a minute 23:15:14 compose_zookeeper_1 Up About a minute 23:15:14 simulator Up About a minute 23:15:14 prometheus Up About a minute 23:15:14 mariadb Up About a minute 23:15:14 + echo 23:15:14 + docker stats --no-stream 23:15:14 23:15:16 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 23:15:16 449ae192c5f8 policy-apex-pdp 1.89% 196.6MiB / 31.41GiB 0.61% 8.96kB / 9.08kB 0B / 0B 48 23:15:16 2660d0af5148 policy-pap 23.86% 503.2MiB / 31.41GiB 1.56% 33.2kB / 34.9kB 0B / 153MB 62 23:15:16 2e9139655fcb policy-api 0.16% 584.7MiB / 31.41GiB 1.82% 1MB / 713kB 0B / 0B 54 23:15:16 08384d5ceb43 kafka 33.34% 378.1MiB / 31.41GiB 1.18% 71.5kB / 74.1kB 0B / 508kB 84 23:15:16 f546d0eafbd2 grafana 0.05% 52.66MiB / 31.41GiB 0.16% 19.2kB / 3.51kB 0B / 25.1MB 17 23:15:16 a43e52e74271 compose_zookeeper_1 0.14% 96.9MiB / 31.41GiB 0.30% 56.3kB / 50.2kB 0B / 348kB 60 23:15:16 a0dad09ee093 simulator 0.08% 126.5MiB / 31.41GiB 0.39% 1.38kB / 0B 0B / 0B 77 23:15:16 0f6fde4a13bf prometheus 0.00% 18.89MiB / 31.41GiB 0.06% 2.16kB / 474B 4.1kB / 0B 13 23:15:16 9964de5b37fd mariadb 0.02% 102.2MiB / 31.41GiB 0.32% 1MB / 1.2MB 10.8MB / 68MB 38 23:15:16 + echo 23:15:16 23:15:16 + cd /tmp/tmp.esLYy6xppR 23:15:16 + echo 'Reading the testplan:' 23:15:16 Reading the testplan: 23:15:16 + echo 'pap-test.robot 23:15:16 + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' 23:15:16 pap-slas.robot' 23:15:16 + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|' 23:15:16 + cat testplan.txt 23:15:16 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot 23:15:16 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot 23:15:16 ++ xargs 23:15:16 + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot' 23:15:16 + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 23:15:16 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' 23:15:16 ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 23:15:16 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 23:15:16 + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...' 23:15:16 Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ... 
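The testplan handling above amounts to stripping comments and blank lines from the suite list, prefixing the tests directory, and flattening the result into a single SUITES string; roughly (a sketch, with the workspace path shortened into a variable and the redirection order assumed):

    TESTS_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests
    echo "$SUITES" \
      | egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' \
      | sed "s|^|${TESTS_DIR}/|" > testplan.txt
    cat testplan.txt
    SUITES=$(xargs < testplan.txt)   # join the suite paths onto one line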
23:15:16 + relax_set 23:15:16 + set +e 23:15:16 + set +o pipefail 23:15:16 + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot 23:15:16 ============================================================================== 23:15:16 pap 23:15:16 ============================================================================== 23:15:17 pap.Pap-Test 23:15:17 ============================================================================== 23:15:17 LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS | 23:15:17 ------------------------------------------------------------------------------ 23:15:18 LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS | 23:15:18 ------------------------------------------------------------------------------ 23:15:19 LoadNodeTemplates :: Create node templates in database using speci... | PASS | 23:15:19 ------------------------------------------------------------------------------ 23:15:19 Healthcheck :: Verify policy pap health check | PASS | 23:15:19 ------------------------------------------------------------------------------ 23:15:39 Consolidated Healthcheck :: Verify policy consolidated health check | PASS | 23:15:39 ------------------------------------------------------------------------------ 23:15:40 Metrics :: Verify policy pap is exporting prometheus metrics | PASS | 23:15:40 ------------------------------------------------------------------------------ 23:15:40 AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS | 23:15:40 ------------------------------------------------------------------------------ 23:15:40 QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS | 23:15:40 ------------------------------------------------------------------------------ 23:15:41 ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS | 23:15:41 ------------------------------------------------------------------------------ 23:15:41 QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS | 23:15:41 ------------------------------------------------------------------------------ 23:15:41 DeployPdpGroups :: Deploy policies in PdpGroups | PASS | 23:15:41 ------------------------------------------------------------------------------ 23:15:41 QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS | 23:15:41 ------------------------------------------------------------------------------ 23:15:42 QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS | 23:15:42 ------------------------------------------------------------------------------ 23:15:42 QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS | 23:15:42 ------------------------------------------------------------------------------ 23:15:42 UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... 
| PASS | 23:15:42 ------------------------------------------------------------------------------ 23:15:42 UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS | 23:15:42 ------------------------------------------------------------------------------ 23:15:43 QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS | 23:15:43 ------------------------------------------------------------------------------ 23:16:03 QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS | 23:16:03 ------------------------------------------------------------------------------ 23:16:03 QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS | 23:16:03 ------------------------------------------------------------------------------ 23:16:03 DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS | 23:16:03 ------------------------------------------------------------------------------ 23:16:03 DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS | 23:16:03 ------------------------------------------------------------------------------ 23:16:03 QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS | 23:16:03 ------------------------------------------------------------------------------ 23:16:03 pap.Pap-Test | PASS | 23:16:03 22 tests, 22 passed, 0 failed 23:16:03 ============================================================================== 23:16:03 pap.Pap-Slas 23:16:03 ============================================================================== 23:17:03 WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS | 23:17:03 ------------------------------------------------------------------------------ 23:17:04 ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS | 23:17:04 ------------------------------------------------------------------------------ 23:17:04 ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS | 23:17:04 ------------------------------------------------------------------------------ 23:17:04 ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS | 23:17:04 ------------------------------------------------------------------------------ 23:17:04 ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS | 23:17:04 ------------------------------------------------------------------------------ 23:17:04 ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS | 23:17:04 ------------------------------------------------------------------------------ 23:17:04 ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS | 23:17:04 ------------------------------------------------------------------------------ 23:17:04 ValidateResponseTimeDeleteGroup :: Validate delete group response ... 
| PASS |
23:17:04 ------------------------------------------------------------------------------
23:17:04 pap.Pap-Slas | PASS |
23:17:04 8 tests, 8 passed, 0 failed
23:17:04 ==============================================================================
23:17:04 pap | PASS |
23:17:04 30 tests, 30 passed, 0 failed
23:17:04 ==============================================================================
23:17:04 Output: /tmp/tmp.esLYy6xppR/output.xml
23:17:04 Log: /tmp/tmp.esLYy6xppR/log.html
23:17:04 Report: /tmp/tmp.esLYy6xppR/report.html
23:17:04 + RESULT=0
23:17:04 + load_set
23:17:04 + _setopts=hxB
23:17:04 ++ echo braceexpand:hashall:interactive-comments:xtrace
23:17:04 ++ tr : ' '
23:17:04 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:17:04 + set +o braceexpand
23:17:04 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:17:04 + set +o hashall
23:17:04 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:17:04 + set +o interactive-comments
23:17:04 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ')
23:17:04 + set +o xtrace
23:17:04 ++ echo hxB
23:17:04 ++ sed 's/./& /g'
23:17:04 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:17:04 + set +h
23:17:04 + for i in $(echo "$_setopts" | sed 's/./& /g')
23:17:04 + set +x
23:17:04 + echo 'RESULT: 0'
23:17:04 RESULT: 0
23:17:04 + exit 0
23:17:04 + on_exit
23:17:04 + rc=0
23:17:04 + [[ -n /w/workspace/policy-pap-master-project-csit-pap ]]
23:17:04 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
23:17:04 NAMES                 STATUS
23:17:04 policy-apex-pdp       Up 2 minutes
23:17:04 policy-pap            Up 2 minutes
23:17:04 policy-api            Up 2 minutes
23:17:04 kafka                 Up 2 minutes
23:17:04 grafana               Up 2 minutes
23:17:04 compose_zookeeper_1   Up 2 minutes
23:17:04 simulator             Up 2 minutes
23:17:04 prometheus            Up 3 minutes
23:17:04 mariadb               Up 3 minutes
23:17:04 + docker_stats
23:17:04 ++ uname -s
23:17:04 + '[' Linux == Darwin ']'
23:17:04 + sh -c 'top -bn1 | head -3'
23:17:04 top - 23:17:04 up 6 min, 0 users, load average: 0.95, 1.41, 0.71
23:17:04 Tasks: 196 total, 1 running, 129 sleeping, 0 stopped, 0 zombie
23:17:04 %Cpu(s): 10.4 us, 2.0 sy, 0.0 ni, 83.0 id, 4.5 wa, 0.0 hi, 0.1 si, 0.1 st
23:17:04 + echo
23:17:04 
23:17:04 + sh -c 'free -h'
23:17:04        total   used   free   shared   buff/cache   available
23:17:04 Mem:     31G   2.8G    22G     1.3M         6.2G         28G
23:17:04 Swap:   1.0G     0B   1.0G
23:17:04 + echo
23:17:04 
23:17:04 + docker ps --format 'table {{ .Names }}\t{{ .Status }}'
23:17:04 NAMES                 STATUS
23:17:04 policy-apex-pdp       Up 2 minutes
23:17:04 policy-pap            Up 2 minutes
23:17:04 policy-api            Up 2 minutes
23:17:04 kafka                 Up 2 minutes
23:17:04 grafana               Up 2 minutes
23:17:04 compose_zookeeper_1   Up 3 minutes
23:17:04 simulator             Up 2 minutes
23:17:04 prometheus            Up 3 minutes
23:17:04 mariadb               Up 3 minutes
23:17:04 + echo
23:17:04 
23:17:04 + docker stats --no-stream
23:17:07 CONTAINER ID   NAME                  CPU %    MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O     PIDS
23:17:07 449ae192c5f8   policy-apex-pdp       0.27%    187.9MiB / 31.41GiB   0.58%   58.7kB / 93.5kB   0B / 0B       52
23:17:07 2660d0af5148   policy-pap            0.67%    497.6MiB / 31.41GiB   1.55%   2.34MB / 818kB    0B / 153MB    66
23:17:07 2e9139655fcb   policy-api            0.14%    590.7MiB / 31.41GiB   1.84%   2.49MB / 1.27MB   0B / 0B       57
23:17:07 08384d5ceb43   kafka                 10.98%   390.8MiB / 31.41GiB   1.21%   241kB / 216kB     0B / 614kB    85
23:17:07 f546d0eafbd2   grafana               0.04%    59.45MiB / 31.41GiB   0.18%   20.1kB / 4.54kB   0B / 25.1MB   17
23:17:07 a43e52e74271   compose_zookeeper_1   0.09%    97.97MiB / 31.41GiB   0.30%   59.1kB / 51.8kB   0B / 348kB    60
23:17:07 a0dad09ee093   simulator             0.11%    126.6MiB / 31.41GiB   0.39%   1.54kB / 0B       0B / 0B       78
23:17:07 0f6fde4a13bf   prometheus
0.23% 25.2MiB / 31.41GiB 0.08% 189kB / 11.1kB 4.1kB / 0B 13 23:17:07 9964de5b37fd mariadb 0.03% 103.6MiB / 31.41GiB 0.32% 1.96MB / 4.78MB 10.8MB / 68.4MB 27 23:17:07 + echo 23:17:07 23:17:07 + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh 23:17:07 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']' 23:17:07 + relax_set 23:17:07 + set +e 23:17:07 + set +o pipefail 23:17:07 + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh 23:17:07 ++ echo 'Shut down started!' 23:17:07 Shut down started! 23:17:07 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:17:07 ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose 23:17:07 ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose 23:17:07 ++ source export-ports.sh 23:17:07 ++ source get-versions.sh 23:17:09 ++ echo 'Collecting logs from docker compose containers...' 23:17:09 Collecting logs from docker compose containers... 23:17:09 ++ docker-compose logs 23:17:10 ++ cat docker_compose.log 23:17:10 Attaching to policy-apex-pdp, policy-pap, policy-api, kafka, policy-db-migrator, grafana, compose_zookeeper_1, simulator, prometheus, mariadb 23:17:10 zookeeper_1 | ===> User 23:17:10 zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 23:17:10 zookeeper_1 | ===> Configuring ... 23:17:10 zookeeper_1 | ===> Running preflight checks ... 23:17:10 zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ... 23:17:10 zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ... 23:17:10 zookeeper_1 | ===> Launching ... 23:17:10 zookeeper_1 | ===> Launching zookeeper ... 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,182] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,188] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,188] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,188] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,188] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,190] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,190] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,190] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,190] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,191] INFO Log4j 1.2 jmx support not found; jmx disabled. 
(org.apache.zookeeper.jmx.ManagedUtil) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,191] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,192] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,192] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,192] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,192] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,192] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,202] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@26275bef (org.apache.zookeeper.server.ServerMetrics) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,204] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,205] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,207] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,220] INFO (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,220] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,220] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,220] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,220] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,220] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,220] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,220] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,220] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,220] INFO (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,223] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,223] INFO Server environment:host.name=a43e52e74271 (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 grafana | logger=settings t=2024-03-09T23:14:06.870014712Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2024-03-09T23:14:06Z 23:17:10 grafana | logger=settings t=2024-03-09T23:14:06.870325498Z level=info msg="Config loaded from" 
file=/usr/share/grafana/conf/defaults.ini 23:17:10 grafana | logger=settings t=2024-03-09T23:14:06.870342279Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 23:17:10 grafana | logger=settings t=2024-03-09T23:14:06.870348489Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 23:17:10 grafana | logger=settings t=2024-03-09T23:14:06.870351979Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 23:17:10 grafana | logger=settings t=2024-03-09T23:14:06.870355169Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 23:17:10 grafana | logger=settings t=2024-03-09T23:14:06.870357839Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 23:17:10 grafana | logger=settings t=2024-03-09T23:14:06.870360499Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 23:17:10 grafana | logger=settings t=2024-03-09T23:14:06.870391529Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 23:17:10 grafana | logger=settings t=2024-03-09T23:14:06.8703952Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 23:17:10 grafana | logger=settings t=2024-03-09T23:14:06.87039808Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 23:17:10 grafana | logger=settings t=2024-03-09T23:14:06.87040096Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 23:17:10 grafana | logger=settings t=2024-03-09T23:14:06.87040537Z level=info msg=Target target=[all] 23:17:10 grafana | logger=settings t=2024-03-09T23:14:06.87041143Z level=info msg="Path Home" path=/usr/share/grafana 23:17:10 grafana | logger=settings t=2024-03-09T23:14:06.87041433Z level=info msg="Path Data" path=/var/lib/grafana 23:17:10 grafana | logger=settings t=2024-03-09T23:14:06.87041699Z level=info msg="Path Logs" path=/var/log/grafana 23:17:10 grafana | logger=settings t=2024-03-09T23:14:06.87041954Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 23:17:10 grafana | logger=settings t=2024-03-09T23:14:06.87042345Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 23:17:10 grafana | logger=settings t=2024-03-09T23:14:06.87042657Z level=info msg="App mode production" 23:17:10 grafana | logger=sqlstore t=2024-03-09T23:14:06.870784267Z level=info msg="Connecting to DB" dbtype=sqlite3 23:17:10 grafana | logger=sqlstore t=2024-03-09T23:14:06.870807627Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:06.871469369Z level=info msg="Starting DB migrations" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:06.872349596Z level=info msg="Executing migration" id="create migration_log table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:06.873216812Z level=info msg="Migration successfully executed" id="create migration_log table" duration=867.046µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:06.981254806Z level=info msg="Executing migration" id="create user table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:06.982881706Z level=info msg="Migration successfully executed" id="create user table" duration=1.62613ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.02969307Z level=info 
msg="Executing migration" id="add unique index user.login" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.030952781Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=1.259401ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.138488216Z level=info msg="Executing migration" id="add unique index user.email" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.140186594Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.686908ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.233216718Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.234904916Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=1.708648ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.40499938Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.409023446Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=4.016736ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.487136563Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.489190847Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=2.056854ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.660950238Z level=info msg="Executing migration" id="create user table v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.662346521Z level=info msg="Migration successfully executed" id="create user table v2" duration=1.397463ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.735284702Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.736433251Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=1.148909ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.743759182Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.744876211Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=1.117549ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.749893104Z level=info msg="Executing migration" id="copy data_source v1 to v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.750540235Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=646.941µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.755513677Z level=info msg="Executing migration" id="Drop old table user_v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.756303881Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=790.174µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.761573618Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.762642366Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.069758ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.766556151Z level=info msg="Executing migration" id="Update user table charset" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.766597181Z level=info 
msg="Migration successfully executed" id="Update user table charset" duration=42.02µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.792465961Z level=info msg="Executing migration" id="Add last_seen_at column to user" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.794111598Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.651797ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.824614884Z level=info msg="Executing migration" id="Add missing user data" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.824993341Z level=info msg="Migration successfully executed" id="Add missing user data" duration=377.257µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.870560518Z level=info msg="Executing migration" id="Add is_disabled column to user" 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,223] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,223] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,223] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 kafka | ===> User 23:17:10 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 23:17:10 kafka | ===> Configuring ... 23:17:10 kafka | Running in Zookeeper mode... 23:17:10 kafka | ===> Running preflight checks ... 23:17:10 kafka | ===> Check if /var/lib/kafka/data is writable ... 23:17:10 kafka | ===> Check if Zookeeper is healthy ... 23:17:10 kafka | SLF4J: Class path contains multiple SLF4J bindings. 23:17:10 kafka | SLF4J: Found binding in [jar:file:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] 23:17:10 kafka | SLF4J: Found binding in [jar:file:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] 23:17:10 kafka | SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 23:17:10 kafka | SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory] 23:17:10 kafka | [2024-03-09 23:14:13,919] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:13,920] INFO Client environment:host.name=08384d5ceb43 (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:13,920] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:13,920] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:13,920] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.872692843Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=2.131875ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.886808317Z level=info msg="Executing migration" id="Add index user.login/user.email" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:07.887732862Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=927.185µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:08.026716678Z level=info msg="Executing migration" id="Add is_service_account column to user" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:08.028785502Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=2.072094ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:08.055036652Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:08.065796802Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=10.76204ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:08.12898996Z level=info msg="Executing migration" id="Add uid column to user" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:08.131346821Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=2.36587ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:08.217943338Z level=info msg="Executing migration" id="Update uid column values for users" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:08.218430219Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=492.171µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:08.294997371Z level=info msg="Executing migration" id="Add unique index user_uid" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:08.296604245Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=1.608855ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:08.357298187Z level=info msg="Executing migration" id="create temp user table v1-7" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:08.358224337Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=928.76µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:08.497528355Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:08.498601378Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.075683ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:08.66649228Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:08.668202857Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.710797ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:08.715769878Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:08.717023895Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" 
duration=1.258097ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:08.787347234Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:08.788696343Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.350259ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:08.825196606Z level=info msg="Executing migration" id="Update temp_user table charset" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:08.825240437Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=46.041µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:08.851702804Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:08.85290835Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.198226ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:08.890096708Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:08.891383595Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.287587ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:08.994295254Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:08.995719014Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.428351ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:09.013196005Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:09.015690982Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=3.122238ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:09.059428799Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:09.065664644Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=6.235715ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:09.139639263Z level=info msg="Executing migration" id="create temp_user v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:09.141022758Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=1.386626ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:09.245872874Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:09.247939732Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=2.066788ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:09.691296068Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:09.692998969Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=1.708442ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:09.720515895Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:09.722310067Z level=info msg="Migration successfully executed" id="create index 
IDX_temp_user_code - v2" duration=1.799713ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:09.84760805Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:09.848645619Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.027878ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:09.910172269Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:09.910869392Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=689.233µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:10.003380872Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:10.004340059Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=959.427µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:10.064118255Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:10.064800977Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=683.442µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:10.191870664Z level=info msg="Executing migration" id="create star table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:10.193504851Z level=info msg="Migration successfully executed" id="create star table" duration=1.643528ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:10.227166009Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:10.22843795Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=1.271351ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:10.349880504Z level=info msg="Executing migration" id="create org table v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:10.35203688Z level=info msg="Migration successfully executed" id="create org table v1" duration=2.161486ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:10.426721658Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:10.427988879Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=1.266951ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:10.475737482Z level=info msg="Executing migration" id="create org_user table v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:10.478485497Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=2.747705ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:10.548284685Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:10.549427514Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.14491ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:10.662079192Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:10.665154843Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" 
duration=3.080441ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:10.69875347Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:10.700396697Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.643807ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:10.78073256Z level=info msg="Executing migration" id="Update org table charset" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:10.780796411Z level=info msg="Migration successfully executed" id="Update org table charset" duration=69.091µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:10.921541665Z level=info msg="Executing migration" id="Update org_user table charset" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:10.921607436Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=70.771µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:11.005307354Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:11.005937724Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=636.5µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:11.08215523Z level=info msg="Executing migration" id="create dashboard table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:11.084494302Z level=info msg="Migration successfully executed" id="create dashboard table" duration=2.338282ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:11.162685385Z level=info msg="Executing migration" id="add index dashboard.account_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:11.164468618Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.789133ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:11.225263332Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:11.23056862Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=5.299418ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:11.319674453Z level=info msg="Executing migration" id="create dashboard_tag table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:11.327309763Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=7.6396ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:11.505725722Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:11.507071577Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=1.347975ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:11.588658323Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:11.589185222Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=527.519µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:11.662964354Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:11.66984934Z level=info msg="Migration successfully executed" id="Rename table dashboard to 
dashboard_v1 - v1" duration=6.888856ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:11.710505256Z level=info msg="Executing migration" id="create dashboard v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:11.711752348Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=1.248392ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:11.7795275Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:11.780873775Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.347335ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:11.847678899Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:11.8493876Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.725582ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:11.859972834Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:11.860372951Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=399.927µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:11.865383933Z level=info msg="Executing migration" id="drop table dashboard_v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:11.86630358Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=917.777µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:11.973059516Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:11.973294731Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=236.755µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.14469168Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.148365942Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=3.676032ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.205001556Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.209059646Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=4.05866ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.24534991Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.2473903Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=2.04456ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.25500957Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.255723574Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=713.574µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.264415625Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.27229976Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=7.883865ms 23:17:10 grafana | 
logger=migrator t=2024-03-09T23:14:12.278811648Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.280269997Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=1.461458ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.352281243Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.35364959Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=1.366377ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.367291068Z level=info msg="Executing migration" id="Update dashboard table charset" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.367429501Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=136.113µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.452000114Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.452100106Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=96.702µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.530902856Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.535001027Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=4.096701ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.626716191Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.629984675Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=3.271764ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.68311511Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.684933796Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.819996ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.692924333Z level=info msg="Executing migration" id="Add column uid in dashboard" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.698130946Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=5.190992ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.778851513Z level=info msg="Executing migration" id="Update uid column values in dashboard" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.779412994Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=567.421µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.848613906Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.850409841Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=1.794555ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.980018591Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:12.980821467Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=804.066µs 23:17:10 grafana | logger=migrator 
t=2024-03-09T23:14:13.068400731Z level=info msg="Executing migration" id="Update dashboard title length" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:13.068453392Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=54.191µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:13.136253516Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:13.137018221Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=764.265µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:13.247649374Z level=info msg="Executing migration" id="create dashboard_provisioning" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:13.248938919Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=1.289585ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:13.329249395Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:13.336960895Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=7.71055ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:13.40070182Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:13.403025995Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=2.247843ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:13.471539253Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:13.473172894Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.636872ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:13.524456008Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:13.52611019Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.649662ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:13.642591017Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:13.643568676Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=977.779µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:13.665217515Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:13.666276746Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=1.058801ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:13.699472269Z level=info msg="Executing migration" id="Add check_sum column" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:13.703904525Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=4.432226ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:13.802531506Z level=info msg="Executing migration" id="Add index for dashboard_title" 23:17:10 grafana | logger=migrator 
t=2024-03-09T23:14:13.804447833Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.917567ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:13.855045203Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:13.855530843Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=482.29µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:13.956907477Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 23:17:10 mariadb | 2024-03-09 23:14:02+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 23:17:10 mariadb | 2024-03-09 23:14:03+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 23:17:10 mariadb | 2024-03-09 23:14:03+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 23:17:10 mariadb | 2024-03-09 23:14:03+00:00 [Note] [Entrypoint]: Initializing database files 23:17:10 mariadb | 2024-03-09 23:14:03 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:17:10 mariadb | 2024-03-09 23:14:03 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:17:10 mariadb | 2024-03-09 23:14:04 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:17:10 mariadb | 23:17:10 mariadb | 23:17:10 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! 23:17:10 mariadb | To do so, start the server, then issue the following command: 23:17:10 mariadb | 23:17:10 mariadb | '/usr/bin/mysql_secure_installation' 23:17:10 mariadb | 23:17:10 mariadb | which will also give you the option of removing the test 23:17:10 mariadb | databases and anonymous user created by default. This is 23:17:10 mariadb | strongly recommended for production servers. 23:17:10 mariadb | 23:17:10 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb 23:17:10 mariadb | 23:17:10 mariadb | Please report any problems at https://mariadb.org/jira 23:17:10 mariadb | 23:17:10 mariadb | The latest information about MariaDB is available at https://mariadb.org/. 23:17:10 mariadb | 23:17:10 mariadb | Consider joining MariaDB's strong and vibrant community: 23:17:10 mariadb | https://mariadb.org/get-involved/ 23:17:10 mariadb | 23:17:10 mariadb | 2024-03-09 23:14:17+00:00 [Note] [Entrypoint]: Database files initialized 23:17:10 mariadb | 2024-03-09 23:14:17+00:00 [Note] [Entrypoint]: Starting temporary server 23:17:10 mariadb | 2024-03-09 23:14:17+00:00 [Note] [Entrypoint]: Waiting for server startup 23:17:10 mariadb | 2024-03-09 23:14:17 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 95 ... 
23:17:10 mariadb | 2024-03-09 23:14:17 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 23:17:10 mariadb | 2024-03-09 23:14:17 0 [Note] InnoDB: Number of transaction pools: 1 23:17:10 mariadb | 2024-03-09 23:14:17 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 23:17:10 mariadb | 2024-03-09 23:14:17 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 23:17:10 mariadb | 2024-03-09 23:14:17 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:17:10 mariadb | 2024-03-09 23:14:17 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:17:10 mariadb | 2024-03-09 23:14:17 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 23:17:10 mariadb | 2024-03-09 23:14:17 0 [Note] InnoDB: Completed initialization of buffer pool 23:17:10 mariadb | 2024-03-09 23:14:17 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 23:17:10 mariadb | 2024-03-09 23:14:17 0 [Note] InnoDB: 128 rollback segments are active. 23:17:10 mariadb | 2024-03-09 23:14:17 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 23:17:10 mariadb | 2024-03-09 23:14:17 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 23:17:10 mariadb | 2024-03-09 23:14:17 0 [Note] InnoDB: log sequence number 46456; transaction id 14 23:17:10 mariadb | 2024-03-09 23:14:17 0 [Note] Plugin 'FEEDBACK' is disabled. 23:17:10 mariadb | 2024-03-09 23:14:17 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:17:10 mariadb | 2024-03-09 23:14:17 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. 23:17:10 mariadb | 2024-03-09 23:14:17 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. 23:17:10 mariadb | 2024-03-09 23:14:17 0 [Note] mariadbd: ready for connections. 23:17:10 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution 23:17:10 mariadb | 2024-03-09 23:14:18+00:00 [Note] [Entrypoint]: Temporary server started. 23:17:10 mariadb | 2024-03-09 23:14:20+00:00 [Note] [Entrypoint]: Creating user policy_user 23:17:10 mariadb | 2024-03-09 23:14:20+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) 23:17:10 mariadb | 23:17:10 mariadb | 23:17:10 mariadb | 2024-03-09 23:14:20+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf 23:17:10 mariadb | 2024-03-09 23:14:20+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh 23:17:10 mariadb | #!/bin/bash -xv 23:17:10 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved 23:17:10 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 
23:17:10 mariadb | # 23:17:10 kafka | [2024-03-09 23:14:13,920] INFO Client environment:java.class.path=/usr/share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/share/java/kafka/jersey-common-2.39.1.jar:/usr/share/java/kafka/swagger-annotations-2.2.8.jar:/usr/share/java/kafka/jose4j-0.9.3.jar:/usr/share/java/kafka/commons-validator-1.7.jar:/usr/share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/share/java/kafka/rocksdbjni-7.9.2.jar:/usr/share/java/kafka/jackson-annotations-2.13.5.jar:/usr/share/java/kafka/commons-io-2.11.0.jar:/usr/share/java/kafka/javax.activation-api-1.2.0.jar:/usr/share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/share/java/kafka/commons-cli-1.4.jar:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/share/java/kafka/scala-reflect-2.13.11.jar:/usr/share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/share/java/kafka/jline-3.22.0.jar:/usr/share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/share/java/kafka/hk2-api-2.6.1.jar:/usr/share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/share/java/kafka/kafka.jar:/usr/share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/share/java/kafka/scala-library-2.13.11.jar:/usr/share/java/kafka/jakarta.inject-2.6.1.jar:/usr/share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/share/java/kafka/hk2-locator-2.6.1.jar:/usr/share/java/kafka/reflections-0.10.2.jar:/usr/share/java/kafka/slf4j-api-1.7.36.jar:/usr/share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/share/java/kafka/paranamer-2.8.jar:/usr/share/java/kafka/commons-beanutils-1.9.4.jar:/usr/share/java/kafka/jaxb-api-2.3.1.jar:/usr/share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/share/java/kafka/hk2-utils-2.6.1.jar:/usr/share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/share/java/kafka/reload4j-1.2.25.jar:/usr/share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/share/java/kafka/jackson-core-2.13.5.jar:/usr/share/java/kafka/jersey-hk2-2.39.1.jar:/usr/share/java/kafka/jackson-databind-2.13.5.jar:/usr/share/java/kafka/jersey-client-2.39.1.jar:/usr/share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/share/java/kafka/commons-digester-2.1.jar:/usr/share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/share/java/kafka/argparse4j-0.7.0.jar:/usr/share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/kafka/audience-annotations-0.12.0.jar:/usr/share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/share/java/ka
fka/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/kafka/maven-artifact-3.8.8.jar:/usr/share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/share/java/kafka/jersey-server-2.39.1.jar:/usr/share/java/kafka/commons-lang3-3.8.1.jar:/usr/share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/share/java/kafka/jopt-simple-5.0.4.jar:/usr/share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/share/java/kafka/lz4-java-1.8.0.jar:/usr/share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/share/java/kafka/checker-qual-3.19.0.jar:/usr/share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/share/java/kafka/pcollections-4.0.1.jar:/usr/share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/share/java/kafka/commons-logging-1.2.jar:/usr/share/java/kafka/jsr305-3.0.2.jar:/usr/share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/kafka/metrics-core-2.2.0.jar:/usr/share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/share/java/kafka/commons-collections-3.2.2.jar:/usr/share/java/kafka/javassist-3.29.2-GA.jar:/usr/share/java/kafka/caffeine-2.9.3.jar:/usr/share/java/kafka/plexus-utils-3.3.1.jar:/usr/share/java/kafka/zookeeper-3.8.3.jar:/usr/share/java/kafka/activation-1.1.1.jar:/usr/share/java/kafka/netty-common-4.1.100.Final.jar:/usr/share/java/kafka/metrics-core-4.1.12.1.jar:/usr/share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/share/java/kafka/snappy-java-1.1.10.5.jar:/usr/share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/pa
ranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/common-utils-7.6.0.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/utility-belt-7.6.0.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:13,920] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:13,921] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:13,921] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:13,921] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:13,921] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:13,921] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:13,921] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:13,921] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:13,921] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:13,921] INFO Client environment:os.memory.free=487MB (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:13,921] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | 
[2024-03-09 23:14:13,921] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:13,925] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@184cf7cf (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:13,929] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 23:17:10 kafka | [2024-03-09 23:14:13,934] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) 23:17:10 kafka | [2024-03-09 23:14:13,942] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 23:17:10 kafka | [2024-03-09 23:14:13,958] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) 23:17:10 kafka | [2024-03-09 23:14:13,959] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) 23:17:10 kafka | [2024-03-09 23:14:13,967] INFO Socket connection established, initiating session, client: /172.17.0.8:36112, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) 23:17:10 kafka | [2024-03-09 23:14:14,031] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x100000394bc0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) 23:17:10 kafka | [2024-03-09 23:14:14,160] INFO Session: 0x100000394bc0000 closed (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:14,160] INFO EventThread shut down for session: 0x100000394bc0000 (org.apache.zookeeper.ClientCnxn) 23:17:10 kafka | Using log4j config /etc/kafka/log4j.properties 23:17:10 kafka | ===> Launching ... 23:17:10 kafka | ===> Launching kafka ... 23:17:10 kafka | [2024-03-09 23:14:14,919] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 23:17:10 kafka | [2024-03-09 23:14:15,289] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 23:17:10 kafka | [2024-03-09 23:14:15,373] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) 23:17:10 kafka | [2024-03-09 23:14:15,375] INFO starting (kafka.server.KafkaServer) 23:17:10 kafka | [2024-03-09 23:14:15,375] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 23:17:10 kafka | [2024-03-09 23:14:15,389] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) 23:17:10 kafka | [2024-03-09 23:14:15,393] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:15,393] INFO Client environment:host.name=08384d5ceb43 (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:15,393] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:15,393] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:15,394] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:13.957256164Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=350.867µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:13.98182135Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:13.983560904Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.739354ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:14.023998837Z level=info msg="Executing migration" id="Add isPublic for dashboard" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:14.028090536Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=4.091699ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:14.062220717Z level=info msg="Executing migration" id="create data_source table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:14.064090273Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.915407ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:14.109869059Z level=info msg="Executing migration" id="add index data_source.account_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:14.111644243Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.776794ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:14.190831856Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:14.192870585Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=2.041549ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:14.396289182Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:14.398169648Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.884096ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:14.490608697Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:14.492013244Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.408317ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:14.523513304Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:14.531206543Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=7.693189ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:14.543788817Z level=info msg="Executing migration" id="create data_source table v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:14.545460109Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.671293ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:14.648462292Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:14.650533172Z level=info msg="Migration successfully executed" 
id="create index IDX_data_source_org_id - v2" duration=2.072ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:14.663109556Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:14.665016883Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.906677ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:14.704817013Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:14.706124448Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=1.307055ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:14.773220637Z level=info msg="Executing migration" id="Add column with_credentials" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:14.77961218Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=6.394113ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:14.948794925Z level=info msg="Executing migration" id="Add secure json data column" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:14.954440244Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=5.648679ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.036085116Z level=info msg="Executing migration" id="Update data_source table charset" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.036200369Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=116.823µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.075050417Z level=info msg="Executing migration" id="Update initial version to 1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.075543118Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=493.151µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.131350333Z level=info msg="Executing migration" id="Add read_only data column" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.136635418Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=5.289405ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.153023962Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.153344629Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=321.437µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.204935496Z level=info msg="Executing migration" id="Update json_data with nulls" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.205343555Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=409.129µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.215815801Z level=info msg="Executing migration" id="Add uid column" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.220114884Z level=info msg="Migration successfully executed" id="Add uid column" duration=4.300313ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.22729733Z level=info msg="Executing migration" id="Update uid value" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.227512665Z level=info msg="Migration successfully executed" id="Update uid value" duration=214.914µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.314702771Z level=info msg="Executing 
migration" id="Add unique index datasource_org_id_uid" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.317077503Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=2.379182ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.352752905Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.354460142Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.707627ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.423433184Z level=info msg="Executing migration" id="create api_key table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.424537698Z level=info msg="Migration successfully executed" id="create api_key table" duration=1.107894ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.435588987Z level=info msg="Executing migration" id="add index api_key.account_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.436597229Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=1.008242ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.570361684Z level=info msg="Executing migration" id="add index api_key.key" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.57252205Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=2.164027ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.674898876Z level=info msg="Executing migration" id="add index api_key.account_id_name" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.677620785Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=2.721619ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.709052855Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.710869175Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=1.81761ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.748314805Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.750290698Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=1.976683ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.870774946Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.872580595Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.80545ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.919226904Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:15.955284675Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=36.05693ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.017760376Z level=info msg="Executing migration" id="create api_key table v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.019457473Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=1.696987ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.038613437Z level=info msg="Executing migration" id="create index 
IDX_api_key_org_id - v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.040619221Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=2.006333ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.120366214Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.122314587Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.948673ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.181847383Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.183879547Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=2.031834ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.248191147Z level=info msg="Executing migration" id="copy api_key v1 to v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.249176389Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=984.771µs 23:17:10 kafka | [2024-03-09 23:14:15,394] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.3
6.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/k
afka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:15,394] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:15,394] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:15,394] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:15,394] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:15,394] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:15,394] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:15,395] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:15,395] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:15,395] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:15,395] INFO Client environment:os.memory.free=1007MB (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:15,395] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:15,395] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:15,397] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@1f6c9cd8 (org.apache.zookeeper.ZooKeeper) 23:17:10 kafka | [2024-03-09 23:14:15,401] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 23:17:10 kafka | [2024-03-09 23:14:15,408] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 23:17:10 mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); 23:17:10 mariadb | # you may not use this file except in compliance with the License. 23:17:10 mariadb | # You may obtain a copy of the License at 23:17:10 mariadb | # 23:17:10 mariadb | # http://www.apache.org/licenses/LICENSE-2.0 23:17:10 mariadb | # 23:17:10 mariadb | # Unless required by applicable law or agreed to in writing, software 23:17:10 mariadb | # distributed under the License is distributed on an "AS IS" BASIS, 23:17:10 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
23:17:10 mariadb | # See the License for the specific language governing permissions and 23:17:10 mariadb | # limitations under the License. 23:17:10 mariadb | 23:17:10 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:17:10 mariadb | do 23:17:10 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" 23:17:10 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" 23:17:10 mariadb | done 23:17:10 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:17:10 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' 23:17:10 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:17:10 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:17:10 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' 23:17:10 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:17:10 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:17:10 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' 23:17:10 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:17:10 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:17:10 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' 23:17:10 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:17:10 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:17:10 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' 23:17:10 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:17:10 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:17:10 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' 23:17:10 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:17:10 mariadb | 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.279087505Z level=info msg="Executing migration" id="Drop old table api_key_v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.280647099Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=1.562114ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.320280146Z level=info msg="Executing migration" id="Update api_key table charset" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.320391248Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=112.572µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.380208331Z level=info msg="Executing migration" id="Add expires to api_key table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.385387423Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=5.176902ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.432569813Z level=info msg="Executing migration" 
id="Add service account foreign key" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.437345186Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=4.776363ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.493785766Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.494242736Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=461.46µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.534348523Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.537672485Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=3.323823ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.575515073Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.577840443Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.31299ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.666739464Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.667614593Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=874.929µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.747136842Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.748679646Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=1.542413ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.81458372Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.816988812Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=2.404422ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.867927363Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.870040409Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=2.113036ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.946015401Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:16.94827876Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=2.266509ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.097277309Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.099422555Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=2.146496ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.145353836Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.145592341Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=238.345µs 
23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.261892212Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.262064766Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=176.184µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.33175992Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.335534291Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=3.776261ms 23:17:10 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" 23:17:10 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' 23:17:10 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql 23:17:10 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp 23:17:10 mariadb | 23:17:10 mariadb | 2024-03-09 23:14:26+00:00 [Note] [Entrypoint]: Stopping temporary server 23:17:10 mariadb | 2024-03-09 23:14:26 0 [Note] mariadbd (initiated by: unknown): Normal shutdown 23:17:10 mariadb | 2024-03-09 23:14:26 0 [Note] InnoDB: FTS optimize thread exiting. 23:17:10 mariadb | 2024-03-09 23:14:26 0 [Note] InnoDB: Starting shutdown... 23:17:10 mariadb | 2024-03-09 23:14:26 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool 23:17:10 mariadb | 2024-03-09 23:14:26 0 [Note] InnoDB: Buffer pool(s) dump completed at 240309 23:14:26 23:17:10 mariadb | 2024-03-09 23:14:28 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" 23:17:10 mariadb | 2024-03-09 23:14:28 0 [Note] InnoDB: Shutdown completed; log sequence number 314911; transaction id 298 23:17:10 mariadb | 2024-03-09 23:14:28 0 [Note] mariadbd: Shutdown complete 23:17:10 mariadb | 23:17:10 mariadb | 2024-03-09 23:14:28+00:00 [Note] [Entrypoint]: Temporary server stopped 23:17:10 mariadb | 23:17:10 mariadb | 2024-03-09 23:14:28+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. 23:17:10 mariadb | 23:17:10 mariadb | 2024-03-09 23:14:28 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... 23:17:10 mariadb | 2024-03-09 23:14:28 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 23:17:10 mariadb | 2024-03-09 23:14:28 0 [Note] InnoDB: Number of transaction pools: 1 23:17:10 mariadb | 2024-03-09 23:14:28 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 23:17:10 mariadb | 2024-03-09 23:14:28 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 23:17:10 mariadb | 2024-03-09 23:14:28 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:17:10 mariadb | 2024-03-09 23:14:28 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:17:10 mariadb | 2024-03-09 23:14:28 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 23:17:10 mariadb | 2024-03-09 23:14:28 0 [Note] InnoDB: Completed initialization of buffer pool 23:17:10 mariadb | 2024-03-09 23:14:28 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 23:17:10 mariadb | 2024-03-09 23:14:28 0 [Note] InnoDB: 128 rollback segments are active. 23:17:10 mariadb | 2024-03-09 23:14:28 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. 
Physically writing the file full; Please wait ... 23:17:10 mariadb | 2024-03-09 23:14:28 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 23:17:10 mariadb | 2024-03-09 23:14:28 0 [Note] InnoDB: log sequence number 314911; transaction id 299 23:17:10 mariadb | 2024-03-09 23:14:28 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool 23:17:10 mariadb | 2024-03-09 23:14:28 0 [Note] Plugin 'FEEDBACK' is disabled. 23:17:10 mariadb | 2024-03-09 23:14:28 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:17:10 mariadb | 2024-03-09 23:14:28 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. 23:17:10 mariadb | 2024-03-09 23:14:28 0 [Note] Server socket created on IP: '0.0.0.0'. 23:17:10 mariadb | 2024-03-09 23:14:28 0 [Note] Server socket created on IP: '::'. 23:17:10 mariadb | 2024-03-09 23:14:29 0 [Note] mariadbd: ready for connections. 23:17:10 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution 23:17:10 mariadb | 2024-03-09 23:14:29 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) 23:17:10 mariadb | 2024-03-09 23:14:29 0 [Note] InnoDB: Buffer pool(s) load completed at 240309 23:14:29 23:17:10 mariadb | 2024-03-09 23:14:29 7 [Warning] Aborted connection 7 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication) 23:17:10 mariadb | 2024-03-09 23:14:29 9 [Warning] Aborted connection 9 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) 23:17:10 mariadb | 2024-03-09 23:14:29 26 [Warning] Aborted connection 26 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.380585434Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.386357209Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=5.769115ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.48602098Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.486322957Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=304.667µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.51891942Z level=info msg="Executing migration" id="create quota table v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.519777999Z level=info msg="Migration successfully executed" id="create quota table v1" duration=858.989µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.588930872Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.590913924Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.964353ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.642946738Z level=info msg="Executing migration" id="Update quota table charset" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.643070531Z level=info 
msg="Migration successfully executed" id="Update quota table charset" duration=123.623µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.673373085Z level=info msg="Executing migration" id="create plugin_setting table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.674483169Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.109683ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.684127927Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.685096698Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=966.68µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.747296951Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.753350241Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=6.051721ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.831699053Z level=info msg="Executing migration" id="Update plugin_setting table charset" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.831921507Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=223.885µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.865661395Z level=info msg="Executing migration" id="create session table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.866751349Z level=info msg="Migration successfully executed" id="create session table" duration=1.089384ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.877483151Z level=info msg="Executing migration" id="Drop old table playlist table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.877597623Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=117.022µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.930634718Z level=info msg="Executing migration" id="Drop old table playlist_item table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.930973126Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=337.977µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.969225291Z level=info msg="Executing migration" id="create playlist table v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.970970209Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.804059ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.975707591Z level=info msg="Executing migration" id="create playlist item table v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.976547889Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=838.448µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.984032781Z level=info msg="Executing migration" id="Update playlist table charset" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.984123653Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=99.612µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.988321543Z level=info msg="Executing migration" id="Update playlist_item table charset" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.988370744Z level=info msg="Migration successfully executed" id="Update 
playlist_item table charset" duration=42.971µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.991500692Z level=info msg="Executing migration" id="Add playlist column created_at" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.99509934Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=3.598318ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:17.998056844Z level=info msg="Executing migration" id="Add playlist column updated_at" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.001397766Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=3.339652ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.006399234Z level=info msg="Executing migration" id="drop preferences table v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.006630809Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=226.825µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.055302897Z level=info msg="Executing migration" id="drop preferences table v3" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.055604874Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=300.747µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.060718024Z level=info msg="Executing migration" id="create preferences table v3" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.062200126Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=1.456661ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.117285264Z level=info msg="Executing migration" id="Update preferences table charset" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.117365816Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=94.722µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.153634528Z level=info msg="Executing migration" id="Add column team_id in preferences" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.159515155Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=5.895517ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.231118858Z level=info msg="Executing migration" id="Update team_id column values in preferences" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.231385444Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=270.086µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.256170198Z level=info msg="Executing migration" id="Add column week_start in preferences" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.260742547Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=4.569959ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.338591565Z level=info msg="Executing migration" id="Add column preferences.json_data" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.343654125Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=5.06684ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.375873429Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 23:17:10 kafka | [2024-03-09 23:14:15,409] INFO [ZooKeeperClient Kafka server] Waiting until connected. 
(kafka.zookeeper.ZooKeeperClient) 23:17:10 kafka | [2024-03-09 23:14:15,414] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) 23:17:10 kafka | [2024-03-09 23:14:15,423] INFO Socket connection established, initiating session, client: /172.17.0.8:58612, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) 23:17:10 kafka | [2024-03-09 23:14:15,436] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x100000394bc0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 23:17:10 kafka | [2024-03-09 23:14:15,441] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient) 23:17:10 kafka | [2024-03-09 23:14:16,114] INFO Cluster ID = GSgJsqhRTlKOoxzH83EoHQ (kafka.server.KafkaServer) 23:17:10 kafka | [2024-03-09 23:14:16,119] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 23:17:10 kafka | [2024-03-09 23:14:16,177] INFO KafkaConfig values: 23:17:10 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 23:17:10 kafka | alter.config.policy.class.name = null 23:17:10 kafka | alter.log.dirs.replication.quota.window.num = 11 23:17:10 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 23:17:10 kafka | authorizer.class.name = 23:17:10 kafka | auto.create.topics.enable = true 23:17:10 kafka | auto.include.jmx.reporter = true 23:17:10 kafka | auto.leader.rebalance.enable = true 23:17:10 kafka | background.threads = 10 23:17:10 kafka | broker.heartbeat.interval.ms = 2000 23:17:10 kafka | broker.id = 1 23:17:10 kafka | broker.id.generation.enable = true 23:17:10 kafka | broker.rack = null 23:17:10 kafka | broker.session.timeout.ms = 9000 23:17:10 kafka | client.quota.callback.class = null 23:17:10 kafka | compression.type = producer 23:17:10 kafka | connection.failed.authentication.delay.ms = 100 23:17:10 kafka | connections.max.idle.ms = 600000 23:17:10 kafka | connections.max.reauth.ms = 0 23:17:10 kafka | control.plane.listener.name = null 23:17:10 kafka | controlled.shutdown.enable = true 23:17:10 kafka | controlled.shutdown.max.retries = 3 23:17:10 kafka | controlled.shutdown.retry.backoff.ms = 5000 23:17:10 kafka | controller.listener.names = null 23:17:10 kafka | controller.quorum.append.linger.ms = 25 23:17:10 kafka | controller.quorum.election.backoff.max.ms = 1000 23:17:10 kafka | controller.quorum.election.timeout.ms = 1000 23:17:10 kafka | controller.quorum.fetch.timeout.ms = 2000 23:17:10 kafka | controller.quorum.request.timeout.ms = 2000 23:17:10 kafka | controller.quorum.retry.backoff.ms = 20 23:17:10 kafka | controller.quorum.voters = [] 23:17:10 kafka | controller.quota.window.num = 11 23:17:10 kafka | controller.quota.window.size.seconds = 1 23:17:10 kafka | controller.socket.timeout.ms = 30000 23:17:10 kafka | create.topic.policy.class.name = null 23:17:10 kafka | default.replication.factor = 1 23:17:10 kafka | delegation.token.expiry.check.interval.ms = 3600000 23:17:10 kafka | delegation.token.expiry.time.ms = 86400000 23:17:10 kafka | delegation.token.master.key = null 23:17:10 kafka | delegation.token.max.lifetime.ms = 604800000 23:17:10 kafka | delegation.token.secret.key = null 23:17:10 kafka | delete.records.purgatory.purge.interval.requests = 1 23:17:10 kafka | delete.topic.enable = true 23:17:10 kafka | early.start.listeners = null 23:17:10 kafka | fetch.max.bytes = 57671680 23:17:10 kafka | fetch.purgatory.purge.interval.requests = 1000 
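For orientation, the listener and ZooKeeper settings printed in the KafkaConfig dump around this point can be collected into an equivalent server.properties. This is only an illustrative sketch using the values visible in the log; in this CSIT run the broker is configured through the Confluent image's environment, not a hand-written properties file:

    #!/bin/bash
    # Illustrative only: the listener/zookeeper values from the KafkaConfig dump above,
    # written out as a plain server.properties for readability.
    cat > /tmp/server.properties <<'EOF'
    broker.id=1
    zookeeper.connect=zookeeper:2181
    listeners=PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092
    advertised.listeners=PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
    listener.security.protocol.map=PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    inter.broker.listener.name=PLAINTEXT
    log.dirs=/var/lib/kafka/data
    EOF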
23:17:10 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] 23:17:10 kafka | group.consumer.heartbeat.interval.ms = 5000 23:17:10 kafka | group.consumer.max.heartbeat.interval.ms = 15000 23:17:10 kafka | group.consumer.max.session.timeout.ms = 60000 23:17:10 kafka | group.consumer.max.size = 2147483647 23:17:10 kafka | group.consumer.min.heartbeat.interval.ms = 5000 23:17:10 kafka | group.consumer.min.session.timeout.ms = 45000 23:17:10 kafka | group.consumer.session.timeout.ms = 45000 23:17:10 kafka | group.coordinator.new.enable = false 23:17:10 kafka | group.coordinator.threads = 1 23:17:10 kafka | group.initial.rebalance.delay.ms = 3000 23:17:10 kafka | group.max.session.timeout.ms = 1800000 23:17:10 kafka | group.max.size = 2147483647 23:17:10 kafka | group.min.session.timeout.ms = 6000 23:17:10 kafka | initial.broker.registration.timeout.ms = 60000 23:17:10 kafka | inter.broker.listener.name = PLAINTEXT 23:17:10 kafka | inter.broker.protocol.version = 3.6-IV2 23:17:10 kafka | kafka.metrics.polling.interval.secs = 10 23:17:10 kafka | kafka.metrics.reporters = [] 23:17:10 kafka | leader.imbalance.check.interval.seconds = 300 23:17:10 kafka | leader.imbalance.per.broker.percentage = 10 23:17:10 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 23:17:10 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 23:17:10 kafka | log.cleaner.backoff.ms = 15000 23:17:10 kafka | log.cleaner.dedupe.buffer.size = 134217728 23:17:10 kafka | log.cleaner.delete.retention.ms = 86400000 23:17:10 kafka | log.cleaner.enable = true 23:17:10 kafka | log.cleaner.io.buffer.load.factor = 0.9 23:17:10 kafka | log.cleaner.io.buffer.size = 524288 23:17:10 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 23:17:10 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 23:17:10 kafka | log.cleaner.min.cleanable.ratio = 0.5 23:17:10 kafka | log.cleaner.min.compaction.lag.ms = 0 23:17:10 kafka | log.cleaner.threads = 1 23:17:10 kafka | log.cleanup.policy = [delete] 23:17:10 kafka | log.dir = /tmp/kafka-logs 23:17:10 kafka | log.dirs = /var/lib/kafka/data 23:17:10 kafka | log.flush.interval.messages = 9223372036854775807 23:17:10 kafka | log.flush.interval.ms = null 23:17:10 kafka | log.flush.offset.checkpoint.interval.ms = 60000 23:17:10 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 23:17:10 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 23:17:10 kafka | log.index.interval.bytes = 4096 23:17:10 kafka | log.index.size.max.bytes = 10485760 23:17:10 kafka | log.local.retention.bytes = -2 23:17:10 kafka | log.local.retention.ms = -2 23:17:10 kafka | log.message.downconversion.enable = true 23:17:10 kafka | log.message.format.version = 3.0-IV1 23:17:10 kafka | log.message.timestamp.after.max.ms = 9223372036854775807 23:17:10 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 23:17:10 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 23:17:10 kafka | log.message.timestamp.type = CreateTime 23:17:10 kafka | log.preallocate = false 23:17:10 kafka | log.retention.bytes = -1 23:17:10 kafka | log.retention.check.interval.ms = 300000 23:17:10 kafka | log.retention.hours = 168 23:17:10 kafka | log.retention.minutes = null 23:17:10 kafka | log.retention.ms = null 23:17:10 kafka | log.roll.hours = 168 23:17:10 kafka | log.roll.jitter.hours = 0 23:17:10 kafka | log.roll.jitter.ms = null 23:17:10 kafka | log.roll.ms = null 23:17:10 kafka | 
log.segment.bytes = 1073741824 23:17:10 kafka | log.segment.delete.delay.ms = 60000 23:17:10 kafka | max.connection.creation.rate = 2147483647 23:17:10 kafka | max.connections = 2147483647 23:17:10 kafka | max.connections.per.ip = 2147483647 23:17:10 kafka | max.connections.per.ip.overrides = 23:17:10 kafka | max.incremental.fetch.session.cache.slots = 1000 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.376058363Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=192.494µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.505526885Z level=info msg="Executing migration" id="Add preferences index org_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.507538748Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=2.017534ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.551054656Z level=info msg="Executing migration" id="Add preferences index user_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.552387374Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.347699ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.618444769Z level=info msg="Executing migration" id="create alert table v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.62035573Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.911911ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.730554316Z level=info msg="Executing migration" id="add index alert org_id & id " 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.732522709Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.969573ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.78455201Z level=info msg="Executing migration" id="add index alert state" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.785878869Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.326579ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.820293161Z level=info msg="Executing migration" id="add index alert dashboard_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.821777863Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.484972ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.876562674Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.877905093Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=1.339409ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.915708228Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.917222731Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.514313ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.934857301Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:18.935981985Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.124984ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.085055636Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - 
v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.098387473Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=13.340387ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.135097154Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.1363071Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=1.213597ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.142290499Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.143246319Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=956.44µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.21528288Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.21574005Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=460.26µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.219629214Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.220716477Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=1.088283ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.238407178Z level=info msg="Executing migration" id="create alert_notification table v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.239694346Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=1.287128ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.292281198Z level=info msg="Executing migration" id="Add column is_default" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.298234506Z level=info msg="Migration successfully executed" id="Add column is_default" duration=5.954188ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.344421501Z level=info msg="Executing migration" id="Add column frequency" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.350212765Z level=info msg="Migration successfully executed" id="Add column frequency" duration=5.786804ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.38062386Z level=info msg="Executing migration" id="Add column send_reminder" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.386466546Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=5.841326ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.414167373Z level=info msg="Executing migration" id="Add column disable_resolve_message" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.419372695Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=5.196602ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.484266642Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.485960809Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=1.697186ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.566424621Z level=info msg="Executing 
migration" id="Update alert table charset" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.566481973Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=62.232µs 23:17:10 policy-apex-pdp | Waiting for mariadb port 3306... 23:17:10 policy-apex-pdp | mariadb (172.17.0.3:3306) open 23:17:10 policy-apex-pdp | Waiting for kafka port 9092... 23:17:10 policy-apex-pdp | kafka (172.17.0.8:9092) open 23:17:10 policy-apex-pdp | Waiting for pap port 6969... 23:17:10 policy-apex-pdp | pap (172.17.0.10:6969) open 23:17:10 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' 23:17:10 policy-apex-pdp | [2024-03-09T23:15:12.820+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] 23:17:10 policy-apex-pdp | [2024-03-09T23:15:12.991+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:17:10 policy-apex-pdp | allow.auto.create.topics = true 23:17:10 policy-apex-pdp | auto.commit.interval.ms = 5000 23:17:10 policy-apex-pdp | auto.include.jmx.reporter = true 23:17:10 policy-apex-pdp | auto.offset.reset = latest 23:17:10 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:17:10 policy-apex-pdp | check.crcs = true 23:17:10 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:17:10 policy-apex-pdp | client.id = consumer-dbe26ca2-4841-4841-92e1-919ee240973d-1 23:17:10 policy-apex-pdp | client.rack = 23:17:10 policy-apex-pdp | connections.max.idle.ms = 540000 23:17:10 policy-apex-pdp | default.api.timeout.ms = 60000 23:17:10 policy-apex-pdp | enable.auto.commit = true 23:17:10 policy-apex-pdp | exclude.internal.topics = true 23:17:10 policy-apex-pdp | fetch.max.bytes = 52428800 23:17:10 policy-apex-pdp | fetch.max.wait.ms = 500 23:17:10 policy-apex-pdp | fetch.min.bytes = 1 23:17:10 policy-apex-pdp | group.id = dbe26ca2-4841-4841-92e1-919ee240973d 23:17:10 policy-apex-pdp | group.instance.id = null 23:17:10 policy-apex-pdp | heartbeat.interval.ms = 3000 23:17:10 policy-apex-pdp | interceptor.classes = [] 23:17:10 policy-apex-pdp | internal.leave.group.on.close = true 23:17:10 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 23:17:10 policy-apex-pdp | isolation.level = read_uncommitted 23:17:10 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:17:10 policy-apex-pdp | max.partition.fetch.bytes = 1048576 23:17:10 policy-apex-pdp | max.poll.interval.ms = 300000 23:17:10 policy-apex-pdp | max.poll.records = 500 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.582070868Z level=info msg="Executing migration" id="Update alert_notification table charset" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.582122789Z level=info msg="Migration 
successfully executed" id="Update alert_notification table charset" duration=55.951µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.618548194Z level=info msg="Executing migration" id="create notification_journal table v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.619691028Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=1.146654ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.658046594Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.660324563Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=2.279609ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.670650595Z level=info msg="Executing migration" id="drop alert_notification_journal" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.671491544Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=840.339µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.692547117Z level=info msg="Executing migration" id="create alert_notification_state table v1" 23:17:10 kafka | message.max.bytes = 1048588 23:17:10 kafka | metadata.log.dir = null 23:17:10 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 23:17:10 kafka | metadata.log.max.snapshot.interval.ms = 3600000 23:17:10 kafka | metadata.log.segment.bytes = 1073741824 23:17:10 kafka | metadata.log.segment.min.bytes = 8388608 23:17:10 kafka | metadata.log.segment.ms = 604800000 23:17:10 kafka | metadata.max.idle.interval.ms = 500 23:17:10 kafka | metadata.max.retention.bytes = 104857600 23:17:10 kafka | metadata.max.retention.ms = 604800000 23:17:10 kafka | metric.reporters = [] 23:17:10 kafka | metrics.num.samples = 2 23:17:10 kafka | metrics.recording.level = INFO 23:17:10 kafka | metrics.sample.window.ms = 30000 23:17:10 kafka | min.insync.replicas = 1 23:17:10 kafka | node.id = 1 23:17:10 kafka | num.io.threads = 8 23:17:10 kafka | num.network.threads = 3 23:17:10 policy-apex-pdp | metadata.max.age.ms = 300000 23:17:10 policy-apex-pdp | metric.reporters = [] 23:17:10 policy-apex-pdp | metrics.num.samples = 2 23:17:10 policy-apex-pdp | metrics.recording.level = INFO 23:17:10 policy-apex-pdp | metrics.sample.window.ms = 30000 23:17:10 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:17:10 policy-apex-pdp | receive.buffer.bytes = 65536 23:17:10 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:17:10 policy-apex-pdp | reconnect.backoff.ms = 50 23:17:10 policy-apex-pdp | request.timeout.ms = 30000 23:17:10 policy-apex-pdp | retry.backoff.ms = 100 23:17:10 policy-apex-pdp | sasl.client.callback.handler.class = null 23:17:10 policy-apex-pdp | sasl.jaas.config = null 23:17:10 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:17:10 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:17:10 policy-apex-pdp | sasl.kerberos.service.name = null 23:17:10 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:17:10 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:17:10 policy-apex-pdp | sasl.login.callback.handler.class = null 23:17:10 policy-apex-pdp | sasl.login.class = null 23:17:10 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:17:10 policy-apex-pdp | 
sasl.login.read.timeout.ms = null 23:17:10 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:17:10 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:17:10 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:17:10 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:17:10 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:17:10 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:17:10 policy-apex-pdp | sasl.mechanism = GSSAPI 23:17:10 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:17:10 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:17:10 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:17:10 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:17:10 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:17:10 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:17:10 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:17:10 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:17:10 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:17:10 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:17:10 policy-apex-pdp | security.protocol = PLAINTEXT 23:17:10 policy-apex-pdp | security.providers = null 23:17:10 policy-apex-pdp | send.buffer.bytes = 131072 23:17:10 policy-apex-pdp | session.timeout.ms = 45000 23:17:10 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:17:10 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:17:10 policy-apex-pdp | ssl.cipher.suites = null 23:17:10 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:17:10 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:17:10 policy-apex-pdp | ssl.engine.factory.class = null 23:17:10 policy-apex-pdp | ssl.key.password = null 23:17:10 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:17:10 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:17:10 policy-apex-pdp | ssl.keystore.key = null 23:17:10 policy-apex-pdp | ssl.keystore.location = null 23:17:10 policy-apex-pdp | ssl.keystore.password = null 23:17:10 policy-apex-pdp | ssl.keystore.type = JKS 23:17:10 policy-apex-pdp | ssl.protocol = TLSv1.3 23:17:10 policy-apex-pdp | ssl.provider = null 23:17:10 policy-apex-pdp | ssl.secure.random.implementation = null 23:17:10 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:17:10 policy-apex-pdp | ssl.truststore.certificates = null 23:17:10 policy-apex-pdp | ssl.truststore.location = null 23:17:10 policy-apex-pdp | ssl.truststore.password = null 23:17:10 policy-apex-pdp | ssl.truststore.type = JKS 23:17:10 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:17:10 policy-apex-pdp | 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.157+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.158+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,223] INFO Server 
environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:
/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,223] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,224] INFO Server 
environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,224] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,224] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,224] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,224] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,224] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,224] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,224] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,224] INFO Server environment:os.memory.free=490MB (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,224] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,225] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,225] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,225] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,225] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,225] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,225] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,226] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,226] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,227] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,228] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,228] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,229] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,229] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,231] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,231] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,231] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,231] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,231] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:17:10 policy-api | Waiting for mariadb port 3306... 23:17:10 policy-api | mariadb (172.17.0.3:3306) open 23:17:10 policy-api | Waiting for policy-db-migrator port 6824... 23:17:10 policy-api | policy-db-migrator (172.17.0.7:6824) open 23:17:10 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 23:17:10 policy-api | 23:17:10 policy-api | . ____ _ __ _ _ 23:17:10 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 23:17:10 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 23:17:10 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 23:17:10 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / 23:17:10 policy-api | =========|_|==============|___/=/_/_/_/ 23:17:10 policy-api | :: Spring Boot :: (v3.1.8) 23:17:10 policy-api | 23:17:10 policy-api | [2024-03-09T23:14:46.733+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.10 with PID 47 (/app/api.jar started by policy in /opt/app/policy/api/bin) 23:17:10 policy-api | [2024-03-09T23:14:46.734+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" 23:17:10 policy-api | [2024-03-09T23:14:48.990+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 23:17:10 policy-api | [2024-03-09T23:14:49.101+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 98 ms. Found 6 JPA repository interfaces. 23:17:10 policy-api | [2024-03-09T23:14:49.549+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 23:17:10 policy-api | [2024-03-09T23:14:49.550+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 23:17:10 policy-api | [2024-03-09T23:14:50.241+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 23:17:10 policy-api | [2024-03-09T23:14:50.253+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 23:17:10 policy-api | [2024-03-09T23:14:50.256+00:00|INFO|StandardService|main] Starting service [Tomcat] 23:17:10 policy-api | [2024-03-09T23:14:50.256+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] 23:17:10 policy-api | [2024-03-09T23:14:50.352+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 23:17:10 policy-api | [2024-03-09T23:14:50.353+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3545 ms 23:17:10 policy-api | [2024-03-09T23:14:50.831+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 23:17:10 policy-api | [2024-03-09T23:14:50.909+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 23:17:10 policy-api | [2024-03-09T23:14:50.913+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer 23:17:10 policy-api | [2024-03-09T23:14:50.963+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 23:17:10 policy-api | [2024-03-09T23:14:51.331+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 23:17:10 policy-api | [2024-03-09T23:14:51.357+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 23:17:10 policy-api | [2024-03-09T23:14:51.468+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@3f702946 23:17:10 policy-api | [2024-03-09T23:14:51.471+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 23:17:10 policy-api | [2024-03-09T23:14:53.654+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 23:17:10 policy-api | [2024-03-09T23:14:53.658+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 23:17:10 policy-api | [2024-03-09T23:14:54.823+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 23:17:10 policy-api | [2024-03-09T23:14:55.739+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 23:17:10 policy-api | [2024-03-09T23:14:56.943+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 23:17:10 policy-api | [2024-03-09T23:14:57.152+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@5c1348c6, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@4f3eddc0, org.springframework.security.web.context.SecurityContextHolderFilter@69cf9acb, org.springframework.security.web.header.HeaderWriterFilter@62c4ad40, org.springframework.security.web.authentication.logout.LogoutFilter@dcaa0e8, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@3341ba8e, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@5f160f9c, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@234a08ea, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@729f8c5d, org.springframework.security.web.access.ExceptionTranslationFilter@4567dcbc, org.springframework.security.web.access.intercept.AuthorizationFilter@543d242e] 23:17:10 policy-api | [2024-03-09T23:14:58.014+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 23:17:10 policy-api | [2024-03-09T23:14:58.128+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 23:17:10 policy-api | [2024-03-09T23:14:58.156+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' 23:17:10 policy-api | [2024-03-09T23:14:58.177+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 12.32 seconds (process running for 12.972) 23:17:10 policy-api | [2024-03-09T23:15:17.038+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 23:17:10 policy-api | [2024-03-09T23:15:17.038+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 23:17:10 policy-api | [2024-03-09T23:15:17.039+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms 23:17:10 policy-api | [2024-03-09T23:15:17.351+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-2] ***** OrderedServiceImpl implementers: 23:17:10 policy-api | [] 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.158+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710026113156 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.160+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-dbe26ca2-4841-4841-92e1-919ee240973d-1, groupId=dbe26ca2-4841-4841-92e1-919ee240973d] Subscribed to topic(s): policy-pdp-pap 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.174+00:00|INFO|ServiceManager|main] service manager starting 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.174+00:00|INFO|ServiceManager|main] service manager starting topics 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.177+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=dbe26ca2-4841-4841-92e1-919ee240973d, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting 23:17:10 policy-apex-pdp | 
[2024-03-09T23:15:13.197+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:17:10 policy-apex-pdp | allow.auto.create.topics = true 23:17:10 policy-apex-pdp | auto.commit.interval.ms = 5000 23:17:10 policy-apex-pdp | auto.include.jmx.reporter = true 23:17:10 policy-apex-pdp | auto.offset.reset = latest 23:17:10 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:17:10 policy-apex-pdp | check.crcs = true 23:17:10 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:17:10 policy-apex-pdp | client.id = consumer-dbe26ca2-4841-4841-92e1-919ee240973d-2 23:17:10 policy-apex-pdp | client.rack = 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.693950927Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.40623ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.703716117Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.705318112Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=1.601485ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.739466377Z level=info msg="Executing migration" id="Add for to alert table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.745069458Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=5.604421ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.781990103Z level=info msg="Executing migration" id="Add column uid in alert_notification" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.788729668Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=6.740115ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.890012199Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.890389527Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=378.148µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.959694109Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.961185291Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.494782ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.991586926Z level=info msg="Executing migration" id="Remove unique index org_id_name" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:19.992841293Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.254767ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.012358653Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.018826542Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=6.461449ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.038972016Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.039221651Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=249.405µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.143735339Z 
level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.14519172Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.456502ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.22195323Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.223636627Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.683987ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.266919538Z level=info msg="Executing migration" id="Drop old annotation table v4" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.267113432Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=193.004µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.292581589Z level=info msg="Executing migration" id="create annotation table v5" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.294082632Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.500373ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.362687277Z level=info msg="Executing migration" id="add index annotation 0 v3" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.364101798Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.415111ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.420344617Z level=info msg="Executing migration" id="add index annotation 1 v3" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.42185391Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.511513ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.479764155Z level=info msg="Executing migration" id="add index annotation 2 v3" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.481128844Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.364869ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.500118283Z level=info msg="Executing migration" id="add index annotation 3 v3" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.501716297Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.597364ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.533268605Z level=info msg="Executing migration" id="add index annotation 4 v3" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.534925171Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.655856ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.580657475Z level=info msg="Executing migration" id="Update annotation table charset" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.580699926Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=45.591µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.614195396Z level=info msg="Executing migration" id="Add column region_id to annotation table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.620602994Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=6.433219ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.692412788Z level=info msg="Executing migration" id="Drop category_id index" 
23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.693960301Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=1.548393ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.736287571Z level=info msg="Executing migration" id="Add column tags to annotation table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.743120008Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=6.829857ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.768226158Z level=info msg="Executing migration" id="Create annotation_tag table v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.769082117Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=855.589µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.808534545Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.815897843Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=7.362628ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.821545125Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.822953065Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.40792ms 23:17:10 policy-apex-pdp | connections.max.idle.ms = 540000 23:17:10 policy-apex-pdp | default.api.timeout.ms = 60000 23:17:10 policy-apex-pdp | enable.auto.commit = true 23:17:10 policy-apex-pdp | exclude.internal.topics = true 23:17:10 policy-apex-pdp | fetch.max.bytes = 52428800 23:17:10 policy-apex-pdp | fetch.max.wait.ms = 500 23:17:10 policy-apex-pdp | fetch.min.bytes = 1 23:17:10 policy-apex-pdp | group.id = dbe26ca2-4841-4841-92e1-919ee240973d 23:17:10 policy-apex-pdp | group.instance.id = null 23:17:10 policy-apex-pdp | heartbeat.interval.ms = 3000 23:17:10 policy-apex-pdp | interceptor.classes = [] 23:17:10 policy-apex-pdp | internal.leave.group.on.close = true 23:17:10 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 23:17:10 policy-apex-pdp | isolation.level = read_uncommitted 23:17:10 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:17:10 policy-apex-pdp | max.partition.fetch.bytes = 1048576 23:17:10 policy-apex-pdp | max.poll.interval.ms = 300000 23:17:10 policy-apex-pdp | max.poll.records = 500 23:17:10 policy-apex-pdp | metadata.max.age.ms = 300000 23:17:10 policy-apex-pdp | metric.reporters = [] 23:17:10 policy-apex-pdp | metrics.num.samples = 2 23:17:10 policy-apex-pdp | metrics.recording.level = INFO 23:17:10 policy-apex-pdp | metrics.sample.window.ms = 30000 23:17:10 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:17:10 policy-apex-pdp | receive.buffer.bytes = 65536 23:17:10 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:17:10 policy-apex-pdp | reconnect.backoff.ms = 50 23:17:10 policy-apex-pdp | request.timeout.ms = 30000 23:17:10 policy-apex-pdp | retry.backoff.ms = 100 23:17:10 policy-apex-pdp | sasl.client.callback.handler.class = null 23:17:10 policy-apex-pdp | sasl.jaas.config = null 23:17:10 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:17:10 
policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:17:10 policy-apex-pdp | sasl.kerberos.service.name = null 23:17:10 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:17:10 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:17:10 policy-apex-pdp | sasl.login.callback.handler.class = null 23:17:10 policy-apex-pdp | sasl.login.class = null 23:17:10 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:17:10 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:17:10 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:17:10 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:17:10 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:17:10 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:17:10 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:17:10 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:17:10 kafka | num.partitions = 1 23:17:10 kafka | num.recovery.threads.per.data.dir = 1 23:17:10 kafka | num.replica.alter.log.dirs.threads = null 23:17:10 kafka | num.replica.fetchers = 1 23:17:10 kafka | offset.metadata.max.bytes = 4096 23:17:10 kafka | offsets.commit.required.acks = -1 23:17:10 kafka | offsets.commit.timeout.ms = 5000 23:17:10 kafka | offsets.load.buffer.size = 5242880 23:17:10 kafka | offsets.retention.check.interval.ms = 600000 23:17:10 kafka | offsets.retention.minutes = 10080 23:17:10 kafka | offsets.topic.compression.codec = 0 23:17:10 kafka | offsets.topic.num.partitions = 50 23:17:10 kafka | offsets.topic.replication.factor = 1 23:17:10 kafka | offsets.topic.segment.bytes = 104857600 23:17:10 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 23:17:10 kafka | password.encoder.iterations = 4096 23:17:10 kafka | password.encoder.key.length = 128 23:17:10 kafka | password.encoder.keyfactory.algorithm = null 23:17:10 kafka | password.encoder.old.secret = null 23:17:10 kafka | password.encoder.secret = null 23:17:10 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 23:17:10 kafka | process.roles = [] 23:17:10 kafka | producer.id.expiration.check.interval.ms = 600000 23:17:10 kafka | producer.id.expiration.ms = 86400000 23:17:10 kafka | producer.purgatory.purge.interval.requests = 1000 23:17:10 kafka | queued.max.request.bytes = -1 23:17:10 kafka | queued.max.requests = 500 23:17:10 kafka | quota.window.num = 11 23:17:10 kafka | quota.window.size.seconds = 1 23:17:10 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 23:17:10 kafka | remote.log.manager.task.interval.ms = 30000 23:17:10 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 23:17:10 kafka | remote.log.manager.task.retry.backoff.ms = 500 23:17:10 kafka | remote.log.manager.task.retry.jitter = 0.2 23:17:10 kafka | remote.log.manager.thread.pool.size = 10 23:17:10 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 23:17:10 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager 23:17:10 kafka | remote.log.metadata.manager.class.path = null 23:17:10 kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. 
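The kafka | lines here and just below are the broker's effective configuration dumped at startup (a single node with node.id = 1, offsets.topic.replication.factor = 1, num.partitions = 1, and so on). If one wanted to read the same effective configuration from a client rather than from the startup log, a minimal sketch with the Kafka AdminClient would look roughly like this; the bootstrap address kafka:9092 appears in the client configs elsewhere in this log, and everything else here is illustrative:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

// Sketch: query the broker's effective configuration, i.e. roughly the values
// the broker itself dumps in the "kafka |" lines of this log.
public class DescribeBrokerConfig {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // node.id = 1 per the broker config dump above
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "1");
            Config config = admin.describeConfigs(Collections.singleton(broker))
                    .all().get().get(broker);
            config.entries().forEach(e -> System.out.println(e.name() + " = " + e.value()));
        }
    }
}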
23:17:10 kafka | remote.log.metadata.manager.listener.name = null 23:17:10 kafka | remote.log.reader.max.pending.tasks = 100 23:17:10 kafka | remote.log.reader.threads = 10 23:17:10 kafka | remote.log.storage.manager.class.name = null 23:17:10 kafka | remote.log.storage.manager.class.path = null 23:17:10 kafka | remote.log.storage.manager.impl.prefix = rsm.config. 23:17:10 kafka | remote.log.storage.system.enable = false 23:17:10 kafka | replica.fetch.backoff.ms = 1000 23:17:10 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json 23:17:10 simulator | overriding logback.xml 23:17:10 simulator | 2024-03-09 23:14:05,851 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json 23:17:10 simulator | 2024-03-09 23:14:05,919 INFO org.onap.policy.models.simulators starting 23:17:10 simulator | 2024-03-09 23:14:05,920 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties 23:17:10 simulator | 2024-03-09 23:14:06,127 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION 23:17:10 simulator | 2024-03-09 23:14:06,128 INFO org.onap.policy.models.simulators starting A&AI simulator 23:17:10 simulator | 2024-03-09 23:14:06,263 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:17:10 simulator | 2024-03-09 23:14:06,274 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:17:10 simulator | 2024-03-09 23:14:06,276 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI 
simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:17:10 simulator | 2024-03-09 23:14:06,282 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:17:10 simulator | 2024-03-09 23:14:06,356 INFO Session workerName=node0 23:17:10 simulator | 2024-03-09 23:14:06,939 INFO Using GSON for REST calls 23:17:10 simulator | 2024-03-09 23:14:07,019 INFO Started o.e.j.s.ServletContextHandler@2a2c13a8{/,null,AVAILABLE} 23:17:10 simulator | 2024-03-09 23:14:07,025 INFO Started A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} 23:17:10 simulator | 2024-03-09 23:14:07,031 INFO Started Server@45905bff{STARTING}[11.0.20,sto=0] @1717ms 23:17:10 simulator | 2024-03-09 23:14:07,032 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,AVAILABLE}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4244 ms. 23:17:10 simulator | 2024-03-09 23:14:07,038 INFO org.onap.policy.models.simulators starting SDNC simulator 23:17:10 simulator | 2024-03-09 23:14:07,041 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:17:10 simulator | 2024-03-09 23:14:07,042 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:17:10 simulator | 
2024-03-09 23:14:07,043 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:17:10 simulator | 2024-03-09 23:14:07,045 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:17:10 simulator | 2024-03-09 23:14:07,055 INFO Session workerName=node0 23:17:10 simulator | 2024-03-09 23:14:07,111 INFO Using GSON for REST calls 23:17:10 simulator | 2024-03-09 23:14:07,120 INFO Started o.e.j.s.ServletContextHandler@62452cc9{/,null,AVAILABLE} 23:17:10 simulator | 2024-03-09 23:14:07,121 INFO Started SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} 23:17:10 simulator | 2024-03-09 23:14:07,121 INFO Started Server@45e37a7e{STARTING}[11.0.20,sto=0] @1807ms 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,231] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,234] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,234] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,235] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,235] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,235] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,260] INFO Logging initialized @544ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,349] WARN o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,349] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,368] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,394] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,394] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,395] INFO node0 Scavenging every 600000ms 
(org.eclipse.jetty.server.session) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,400] WARN ServletContext@o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,408] INFO Started o.e.j.s.ServletContextHandler@5be1d0a4{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,420] INFO Started ServerConnector@4f32a3ad{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,420] INFO Started @705ms (org.eclipse.jetty.server.Server) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,420] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,424] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,425] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,426] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,428] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,443] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,443] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,444] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,444] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,449] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,449] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,452] INFO Snapshot loaded in 8 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,452] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,453] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,462] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,464] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,476] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) 23:17:10 zookeeper_1 | [2024-03-09 23:14:08,495] INFO 
ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) 23:17:10 zookeeper_1 | [2024-03-09 23:14:13,987] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) 23:17:10 policy-apex-pdp | sasl.mechanism = GSSAPI 23:17:10 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:17:10 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:17:10 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:17:10 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:17:10 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:17:10 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:17:10 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:17:10 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:17:10 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:17:10 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:17:10 policy-apex-pdp | security.protocol = PLAINTEXT 23:17:10 policy-apex-pdp | security.providers = null 23:17:10 policy-apex-pdp | send.buffer.bytes = 131072 23:17:10 policy-apex-pdp | session.timeout.ms = 45000 23:17:10 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:17:10 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:17:10 policy-apex-pdp | ssl.cipher.suites = null 23:17:10 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:17:10 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:17:10 policy-apex-pdp | ssl.engine.factory.class = null 23:17:10 policy-apex-pdp | ssl.key.password = null 23:17:10 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:17:10 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:17:10 policy-apex-pdp | ssl.keystore.key = null 23:17:10 policy-apex-pdp | ssl.keystore.location = null 23:17:10 policy-apex-pdp | ssl.keystore.password = null 23:17:10 policy-apex-pdp | ssl.keystore.type = JKS 23:17:10 policy-apex-pdp | ssl.protocol = TLSv1.3 23:17:10 policy-apex-pdp | ssl.provider = null 23:17:10 policy-apex-pdp | ssl.secure.random.implementation = null 23:17:10 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:17:10 policy-apex-pdp | ssl.truststore.certificates = null 23:17:10 policy-apex-pdp | ssl.truststore.location = null 23:17:10 policy-apex-pdp | ssl.truststore.password = null 23:17:10 policy-apex-pdp | ssl.truststore.type = JKS 23:17:10 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:17:10 policy-apex-pdp | 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.206+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.206+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.206+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710026113206 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.206+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-dbe26ca2-4841-4841-92e1-919ee240973d-2, groupId=dbe26ca2-4841-4841-92e1-919ee240973d] Subscribed to topic(s): policy-pdp-pap 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.207+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=80407e59-d832-4bab-8846-6f714a30495b, alive=false, publisher=null]]: starting 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.219+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:17:10 
policy-apex-pdp | acks = -1 23:17:10 policy-apex-pdp | auto.include.jmx.reporter = true 23:17:10 policy-apex-pdp | batch.size = 16384 23:17:10 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:17:10 policy-apex-pdp | buffer.memory = 33554432 23:17:10 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:17:10 policy-apex-pdp | client.id = producer-1 23:17:10 policy-apex-pdp | compression.type = none 23:17:10 policy-apex-pdp | connections.max.idle.ms = 540000 23:17:10 policy-apex-pdp | delivery.timeout.ms = 120000 23:17:10 policy-apex-pdp | enable.idempotence = true 23:17:10 policy-apex-pdp | interceptor.classes = [] 23:17:10 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:17:10 policy-apex-pdp | linger.ms = 0 23:17:10 policy-apex-pdp | max.block.ms = 60000 23:17:10 policy-apex-pdp | max.in.flight.requests.per.connection = 5 23:17:10 policy-apex-pdp | max.request.size = 1048576 23:17:10 policy-apex-pdp | metadata.max.age.ms = 300000 23:17:10 policy-apex-pdp | metadata.max.idle.ms = 300000 23:17:10 policy-apex-pdp | metric.reporters = [] 23:17:10 policy-apex-pdp | metrics.num.samples = 2 23:17:10 policy-apex-pdp | metrics.recording.level = INFO 23:17:10 policy-apex-pdp | metrics.sample.window.ms = 30000 23:17:10 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true 23:17:10 policy-apex-pdp | partitioner.availability.timeout.ms = 0 23:17:10 policy-apex-pdp | partitioner.class = null 23:17:10 policy-apex-pdp | partitioner.ignore.keys = false 23:17:10 policy-apex-pdp | receive.buffer.bytes = 32768 23:17:10 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:17:10 policy-apex-pdp | reconnect.backoff.ms = 50 23:17:10 simulator | 2024-03-09 23:14:07,122 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,AVAILABLE}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4921 ms. 
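The policy-apex-pdp ConsumerConfig and ProducerConfig blocks above are the standard configuration echoes the Kafka client library logs when a consumer or producer is constructed. As a rough, stand-alone sketch only (the PDP actually builds these clients through ONAP's policy messaging wrappers, as the SingleThreadedBusTopicSource and InlineKafkaTopicSink lines show), the key values from those dumps map onto plain Kafka clients roughly as follows; the bootstrap servers, group id, offset reset, topic name, acks and key (de)serializers are taken from the log, while the value serializer and the poll/send usage are assumptions:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

// Sketch of the client settings echoed in the ConsumerConfig/ProducerConfig dumps above;
// not the PDP's actual wiring, which goes through the policy common messaging classes.
public class PdpPapTopicSketch {
    public static void main(String[] args) {
        Properties consumerProps = new Properties();
        consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "dbe26ca2-4841-4841-92e1-919ee240973d");
        consumerProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
        consumerProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        consumerProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        Properties producerProps = new Properties();
        producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        producerProps.put(ProducerConfig.ACKS_CONFIG, "-1"); // acks = -1 in the dump above
        producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName()); // assumption

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            consumer.subscribe(Collections.singletonList("policy-pdp-pap"));
            consumer.poll(Duration.ofSeconds(1)).forEach(r -> System.out.println(r.value()));
            producer.send(new ProducerRecord<>("policy-pdp-pap", "example payload"));
        }
    }
}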
23:17:10 simulator | 2024-03-09 23:14:07,123 INFO org.onap.policy.models.simulators starting SO simulator 23:17:10 simulator | 2024-03-09 23:14:07,125 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:17:10 simulator | 2024-03-09 23:14:07,125 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:17:10 simulator | 2024-03-09 23:14:07,126 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:17:10 simulator | 2024-03-09 23:14:07,126 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:17:10 simulator | 2024-03-09 23:14:07,129 INFO Session workerName=node0 23:17:10 simulator | 2024-03-09 23:14:07,190 INFO Using GSON for REST calls 23:17:10 simulator | 2024-03-09 23:14:07,202 INFO Started o.e.j.s.ServletContextHandler@488eb7f2{/,null,AVAILABLE} 23:17:10 simulator | 2024-03-09 23:14:07,203 INFO Started SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} 23:17:10 simulator | 2024-03-09 23:14:07,203 INFO Started Server@7516e4e5{STARTING}[11.0.20,sto=0] @1889ms 23:17:10 simulator | 2024-03-09 23:14:07,203 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, 
toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,AVAILABLE}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4923 ms. 23:17:10 simulator | 2024-03-09 23:14:07,204 INFO org.onap.policy.models.simulators starting VFC simulator 23:17:10 simulator | 2024-03-09 23:14:07,207 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:17:10 simulator | 2024-03-09 23:14:07,207 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:17:10 simulator | 2024-03-09 23:14:07,208 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:17:10 simulator | 2024-03-09 23:14:07,208 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:17:10 simulator | 2024-03-09 23:14:07,210 INFO Session workerName=node0 23:17:10 simulator | 2024-03-09 23:14:07,257 INFO Using GSON for REST calls 23:17:10 simulator | 
2024-03-09 23:14:07,265 INFO Started o.e.j.s.ServletContextHandler@6035b93b{/,null,AVAILABLE} 23:17:10 simulator | 2024-03-09 23:14:07,266 INFO Started VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} 23:17:10 simulator | 2024-03-09 23:14:07,267 INFO Started Server@6f0b0a5e{STARTING}[11.0.20,sto=0] @1953ms 23:17:10 simulator | 2024-03-09 23:14:07,267 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,AVAILABLE}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4941 ms. 23:17:10 simulator | 2024-03-09 23:14:07,267 INFO org.onap.policy.models.simulators started 23:17:10 policy-db-migrator | Waiting for mariadb port 3306... 23:17:10 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 23:17:10 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 23:17:10 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 23:17:10 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 23:17:10 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 23:17:10 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 23:17:10 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 23:17:10 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 23:17:10 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 23:17:10 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 23:17:10 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 23:17:10 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 23:17:10 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 23:17:10 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 23:17:10 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 23:17:10 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 23:17:10 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 23:17:10 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 23:17:10 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 23:17:10 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 23:17:10 policy-db-migrator | 
nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 23:17:10 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 23:17:10 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 23:17:10 policy-db-migrator | nc: connect to mariadb (172.17.0.3) port 3306 (tcp) failed: Connection refused 23:17:10 policy-db-migrator | Connection to mariadb (172.17.0.3) 3306 port [tcp/mysql] succeeded! 23:17:10 policy-db-migrator | 321 blocks 23:17:10 policy-db-migrator | Preparing upgrade release version: 0800 23:17:10 policy-db-migrator | Preparing upgrade release version: 0900 23:17:10 policy-db-migrator | Preparing upgrade release version: 1000 23:17:10 policy-db-migrator | Preparing upgrade release version: 1100 23:17:10 policy-db-migrator | Preparing upgrade release version: 1200 23:17:10 policy-db-migrator | Preparing upgrade release version: 1300 23:17:10 policy-db-migrator | Done 23:17:10 policy-db-migrator | name version 23:17:10 policy-db-migrator | policyadmin 0 23:17:10 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 23:17:10 policy-db-migrator | upgrade: 0 -> 1300 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 
0140-jpapdpsubgroup_supportedpolicytypes.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 
23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.853456211Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.868080596Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=14.626915ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.875103637Z level=info msg="Executing migration" id="Create annotation_tag table v3" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.876421055Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=1.316678ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.879848359Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.881601896Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.753827ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.886342588Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.886700096Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=358.768µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.898730225Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.899786827Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=1.059873ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.905063981Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.905343397Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=280.406µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.909825113Z level=info msg="Executing migration" id="Add created time to annotation table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.913881601Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=4.056568ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.917519049Z level=info msg="Executing migration" id="Add updated time to annotation table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.921450353Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.930904ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.928226489Z level=info msg="Executing migration" id="Add index for created in annotation table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.929066777Z level=info msg="Migration successfully 
executed" id="Add index for created in annotation table" duration=842.828µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.931919739Z level=info msg="Executing migration" id="Add index for updated in annotation table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.932736946Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=816.777µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.935668339Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.935878734Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=208.624µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.941723449Z level=info msg="Executing migration" id="Add epoch_end column" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.946946632Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=5.221443ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.952569833Z level=info msg="Executing migration" id="Add index for epoch_end" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.953593945Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.024602ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.957774414Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.957945698Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=170.914µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.963839225Z level=info msg="Executing migration" id="Move region to single row" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.964490659Z level=info msg="Migration successfully executed" id="Move region to single row" duration=651.644µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.969737572Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.970612491Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=878.669µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.972791778Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.974138627Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.331058ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.980008453Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.981534325Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.525782ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.985423449Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.986297828Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=872.719µs 23:17:10 grafana | logger=migrator 
t=2024-03-09T23:14:20.989073078Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.989876885Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=802.157µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.995409964Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.996244482Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=832.238µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.999371839Z level=info msg="Executing migration" id="Increase tags column to length 4096" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:20.999495352Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=125.203µs 23:17:10 policy-apex-pdp | request.timeout.ms = 30000 23:17:10 policy-apex-pdp | retries = 2147483647 23:17:10 policy-apex-pdp | retry.backoff.ms = 100 23:17:10 policy-apex-pdp | sasl.client.callback.handler.class = null 23:17:10 policy-apex-pdp | sasl.jaas.config = null 23:17:10 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:17:10 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:17:10 policy-apex-pdp | sasl.kerberos.service.name = null 23:17:10 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:17:10 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:17:10 policy-apex-pdp | sasl.login.callback.handler.class = null 23:17:10 policy-apex-pdp | sasl.login.class = null 23:17:10 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:17:10 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:17:10 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:17:10 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:17:10 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:17:10 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:17:10 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:17:10 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:17:10 policy-apex-pdp | sasl.mechanism = GSSAPI 23:17:10 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:17:10 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:17:10 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:17:10 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:17:10 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:17:10 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:17:10 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:17:10 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:17:10 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:17:10 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:17:10 policy-pap | Waiting for mariadb port 3306... 23:17:10 policy-pap | mariadb (172.17.0.3:3306) open 23:17:10 policy-pap | Waiting for kafka port 9092... 23:17:10 policy-pap | kafka (172.17.0.8:9092) open 23:17:10 policy-pap | Waiting for api port 6969... 
23:17:10 policy-pap | api (172.17.0.9:6969) open 23:17:10 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 23:17:10 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 23:17:10 policy-pap | 23:17:10 policy-pap | . ____ _ __ _ _ 23:17:10 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 23:17:10 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 23:17:10 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 23:17:10 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 23:17:10 policy-pap | =========|_|==============|___/=/_/_/_/ 23:17:10 policy-pap | :: Spring Boot :: (v3.1.8) 23:17:10 policy-pap | 23:17:10 policy-pap | [2024-03-09T23:15:00.749+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.10 with PID 59 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 23:17:10 policy-pap | [2024-03-09T23:15:00.752+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" 23:17:10 policy-pap | [2024-03-09T23:15:02.772+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 23:17:10 policy-pap | [2024-03-09T23:15:02.903+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 117 ms. Found 7 JPA repository interfaces. 23:17:10 policy-pap | [2024-03-09T23:15:03.346+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 23:17:10 policy-pap | [2024-03-09T23:15:03.347+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 23:17:10 policy-pap | [2024-03-09T23:15:04.073+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 23:17:10 policy-pap | [2024-03-09T23:15:04.085+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 23:17:10 policy-pap | [2024-03-09T23:15:04.087+00:00|INFO|StandardService|main] Starting service [Tomcat] 23:17:10 policy-pap | [2024-03-09T23:15:04.087+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] 23:17:10 policy-pap | [2024-03-09T23:15:04.210+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 23:17:10 policy-pap | [2024-03-09T23:15:04.210+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3369 ms 23:17:10 policy-pap | [2024-03-09T23:15:04.672+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 23:17:10 policy-pap | [2024-03-09T23:15:04.757+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 23:17:10 policy-pap | [2024-03-09T23:15:04.761+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer 23:17:10 policy-pap | [2024-03-09T23:15:04.814+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 23:17:10 policy-pap | [2024-03-09T23:15:05.272+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 23:17:10 policy-pap | [2024-03-09T23:15:05.294+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 
23:17:10 policy-pap | [2024-03-09T23:15:05.404+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@2def7a7a 23:17:10 policy-pap | [2024-03-09T23:15:05.407+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 23:17:10 policy-pap | [2024-03-09T23:15:07.528+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 23:17:10 policy-pap | [2024-03-09T23:15:07.532+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 23:17:10 policy-pap | [2024-03-09T23:15:08.094+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository 23:17:10 policy-pap | [2024-03-09T23:15:08.516+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository 23:17:10 policy-pap | [2024-03-09T23:15:08.636+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository 23:17:10 policy-pap | [2024-03-09T23:15:08.906+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:17:10 policy-pap | allow.auto.create.topics = true 23:17:10 policy-pap | auto.commit.interval.ms = 5000 23:17:10 policy-pap | auto.include.jmx.reporter = true 23:17:10 policy-pap | auto.offset.reset = latest 23:17:10 policy-pap | bootstrap.servers = [kafka:9092] 23:17:10 prometheus | ts=2024-03-09T23:14:02.063Z caller=main.go:564 level=info msg="No time or size retention was set so using the default time retention" duration=15d 23:17:10 prometheus | ts=2024-03-09T23:14:02.063Z caller=main.go:608 level=info msg="Starting Prometheus Server" mode=server version="(version=2.50.1, branch=HEAD, revision=8c9b0285360a0b6288d76214a75ce3025bce4050)" 23:17:10 prometheus | ts=2024-03-09T23:14:02.063Z caller=main.go:613 level=info build_context="(go=go1.21.7, platform=linux/amd64, user=root@6213bb3ee580, date=20240226-11:36:26, tags=netgo,builtinassets,stringlabels)" 23:17:10 prometheus | ts=2024-03-09T23:14:02.064Z caller=main.go:614 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" 23:17:10 prometheus | ts=2024-03-09T23:14:02.064Z caller=main.go:615 level=info fd_limits="(soft=1048576, hard=1048576)" 23:17:10 prometheus | ts=2024-03-09T23:14:02.064Z caller=main.go:616 level=info vm_limits="(soft=unlimited, hard=unlimited)" 23:17:10 prometheus | ts=2024-03-09T23:14:02.066Z caller=web.go:565 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 23:17:10 prometheus | ts=2024-03-09T23:14:02.071Z caller=main.go:1118 level=info msg="Starting TSDB ..." 23:17:10 prometheus | ts=2024-03-09T23:14:02.074Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090 23:17:10 prometheus | ts=2024-03-09T23:14:02.074Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9090 23:17:10 prometheus | ts=2024-03-09T23:14:02.078Z caller=head.go:610 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 23:17:10 prometheus | ts=2024-03-09T23:14:02.078Z caller=head.go:692 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.24µs 23:17:10 prometheus | ts=2024-03-09T23:14:02.078Z caller=head.go:700 level=info component=tsdb msg="Replaying WAL, this may take a while" 23:17:10 prometheus | ts=2024-03-09T23:14:02.078Z caller=head.go:771 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 23:17:10 prometheus | ts=2024-03-09T23:14:02.078Z caller=head.go:808 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=34.231µs wal_replay_duration=295.334µs wbl_replay_duration=170ns total_replay_duration=357.065µs 23:17:10 prometheus | ts=2024-03-09T23:14:02.081Z caller=main.go:1139 level=info fs_type=EXT4_SUPER_MAGIC 23:17:10 prometheus | ts=2024-03-09T23:14:02.081Z caller=main.go:1142 level=info msg="TSDB started" 23:17:10 prometheus | ts=2024-03-09T23:14:02.081Z caller=main.go:1324 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 23:17:10 prometheus | ts=2024-03-09T23:14:02.082Z caller=main.go:1361 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=983.576µs db_storage=1.1µs remote_storage=2.04µs web_handler=420ns query_engine=840ns scrape=238.554µs scrape_sd=128.822µs notify=34.27µs notify_sd=8.611µs rules=1.79µs tracing=5.34µs 23:17:10 prometheus | ts=2024-03-09T23:14:02.082Z caller=main.go:1103 level=info msg="Server is ready to receive web requests." 23:17:10 prometheus | ts=2024-03-09T23:14:02.082Z caller=manager.go:146 level=info component="rule manager" msg="Starting rule manager..." 
23:17:10 policy-pap | check.crcs = true 23:17:10 policy-pap | client.dns.lookup = use_all_dns_ips 23:17:10 policy-pap | client.id = consumer-99d6a240-42a3-48b7-904b-df55de280eab-1 23:17:10 policy-pap | client.rack = 23:17:10 policy-pap | connections.max.idle.ms = 540000 23:17:10 policy-pap | default.api.timeout.ms = 60000 23:17:10 policy-pap | enable.auto.commit = true 23:17:10 policy-pap | exclude.internal.topics = true 23:17:10 policy-pap | fetch.max.bytes = 52428800 23:17:10 policy-pap | fetch.max.wait.ms = 500 23:17:10 policy-pap | fetch.min.bytes = 1 23:17:10 policy-pap | group.id = 99d6a240-42a3-48b7-904b-df55de280eab 23:17:10 policy-pap | group.instance.id = null 23:17:10 policy-pap | heartbeat.interval.ms = 3000 23:17:10 policy-pap | interceptor.classes = [] 23:17:10 policy-pap | internal.leave.group.on.close = true 23:17:10 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:17:10 policy-pap | isolation.level = read_uncommitted 23:17:10 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:17:10 policy-pap | max.partition.fetch.bytes = 1048576 23:17:10 policy-pap | max.poll.interval.ms = 300000 23:17:10 policy-pap | max.poll.records = 500 23:17:10 policy-pap | metadata.max.age.ms = 300000 23:17:10 policy-pap | metric.reporters = [] 23:17:10 policy-pap | metrics.num.samples = 2 23:17:10 policy-pap | metrics.recording.level = INFO 23:17:10 policy-pap | metrics.sample.window.ms = 30000 23:17:10 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:17:10 policy-pap | receive.buffer.bytes = 65536 23:17:10 policy-pap | reconnect.backoff.max.ms = 1000 23:17:10 policy-pap | reconnect.backoff.ms = 50 23:17:10 policy-pap | request.timeout.ms = 30000 23:17:10 policy-pap | retry.backoff.ms = 100 23:17:10 policy-pap | sasl.client.callback.handler.class = null 23:17:10 policy-pap | sasl.jaas.config = null 23:17:10 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:17:10 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:17:10 policy-pap | sasl.kerberos.service.name = null 23:17:10 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:17:10 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:17:10 policy-pap | sasl.login.callback.handler.class = null 23:17:10 policy-pap | sasl.login.class = null 23:17:10 policy-pap | sasl.login.connect.timeout.ms = null 23:17:10 policy-pap | sasl.login.read.timeout.ms = null 23:17:10 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:17:10 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:17:10 policy-pap | sasl.login.refresh.window.factor = 0.8 23:17:10 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:17:10 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:17:10 policy-pap | sasl.login.retry.backoff.ms = 100 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA 
VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 kafka | replica.fetch.max.bytes = 1048576 23:17:10 kafka | replica.fetch.min.bytes = 1 23:17:10 kafka | replica.fetch.response.max.bytes = 10485760 23:17:10 kafka | replica.fetch.wait.max.ms = 500 23:17:10 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 23:17:10 kafka | replica.lag.time.max.ms = 30000 23:17:10 kafka | replica.selector.class = null 23:17:10 kafka | replica.socket.receive.buffer.bytes = 65536 23:17:10 kafka | replica.socket.timeout.ms = 30000 23:17:10 kafka | replication.quota.window.num = 11 23:17:10 kafka | replication.quota.window.size.seconds = 1 23:17:10 kafka | request.timeout.ms = 30000 23:17:10 kafka | reserved.broker.max.id = 1000 23:17:10 kafka | sasl.client.callback.handler.class = null 23:17:10 kafka | sasl.enabled.mechanisms = [GSSAPI] 23:17:10 kafka | sasl.jaas.config = null 23:17:10 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:17:10 kafka | sasl.kerberos.min.time.before.relogin = 60000 23:17:10 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 23:17:10 kafka | sasl.kerberos.service.name = null 23:17:10 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 23:17:10 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 23:17:10 kafka | sasl.login.callback.handler.class = null 23:17:10 kafka | sasl.login.class = null 23:17:10 kafka | sasl.login.connect.timeout.ms = null 23:17:10 kafka | sasl.login.read.timeout.ms = null 23:17:10 kafka | sasl.login.refresh.buffer.seconds = 300 23:17:10 kafka | sasl.login.refresh.min.period.seconds = 60 23:17:10 kafka | sasl.login.refresh.window.factor = 0.8 23:17:10 kafka | sasl.login.refresh.window.jitter = 0.05 23:17:10 kafka | sasl.login.retry.backoff.max.ms = 10000 23:17:10 kafka | sasl.login.retry.backoff.ms = 100 23:17:10 kafka | 
sasl.mechanism.controller.protocol = GSSAPI 23:17:10 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 23:17:10 kafka | sasl.oauthbearer.clock.skew.seconds = 30 23:17:10 kafka | sasl.oauthbearer.expected.audience = null 23:17:10 kafka | sasl.oauthbearer.expected.issuer = null 23:17:10 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:17:10 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:17:10 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:17:10 kafka | sasl.oauthbearer.jwks.endpoint.url = null 23:17:10 kafka | sasl.oauthbearer.scope.claim.name = scope 23:17:10 kafka | sasl.oauthbearer.sub.claim.name = sub 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 
policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 kafka | sasl.oauthbearer.token.endpoint.url = null 23:17:10 kafka | sasl.server.callback.handler.class = null 23:17:10 kafka | sasl.server.max.receive.size = 524288 23:17:10 kafka | security.inter.broker.protocol = PLAINTEXT 23:17:10 kafka | security.providers = null 23:17:10 kafka | server.max.startup.time.ms = 9223372036854775807 23:17:10 kafka | socket.connection.setup.timeout.max.ms = 30000 23:17:10 kafka | socket.connection.setup.timeout.ms = 10000 23:17:10 kafka | socket.listen.backlog.size = 50 23:17:10 kafka | socket.receive.buffer.bytes = 102400 23:17:10 kafka | socket.request.max.bytes = 104857600 23:17:10 kafka | socket.send.buffer.bytes = 102400 23:17:10 kafka | ssl.cipher.suites = [] 23:17:10 kafka | ssl.client.auth = none 23:17:10 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:17:10 kafka | ssl.endpoint.identification.algorithm = https 23:17:10 kafka | ssl.engine.factory.class = null 23:17:10 kafka | ssl.key.password = null 23:17:10 kafka | ssl.keymanager.algorithm = SunX509 23:17:10 kafka | ssl.keystore.certificate.chain = null 23:17:10 kafka | ssl.keystore.key = null 23:17:10 kafka | ssl.keystore.location = null 23:17:10 kafka | ssl.keystore.password = null 23:17:10 kafka | ssl.keystore.type = JKS 23:17:10 kafka | ssl.principal.mapping.rules = DEFAULT 23:17:10 kafka | ssl.protocol = TLSv1.3 23:17:10 
kafka | ssl.provider = null 23:17:10 kafka | ssl.secure.random.implementation = null 23:17:10 kafka | ssl.trustmanager.algorithm = PKIX 23:17:10 kafka | ssl.truststore.certificates = null 23:17:10 kafka | ssl.truststore.location = null 23:17:10 kafka | ssl.truststore.password = null 23:17:10 kafka | ssl.truststore.type = JKS 23:17:10 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 23:17:10 kafka | transaction.max.timeout.ms = 900000 23:17:10 kafka | transaction.partition.verification.enable = true 23:17:10 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 23:17:10 kafka | transaction.state.log.load.buffer.size = 5242880 23:17:10 kafka | transaction.state.log.min.isr = 2 23:17:10 kafka | transaction.state.log.num.partitions = 50 23:17:10 kafka | transaction.state.log.replication.factor = 3 23:17:10 kafka | transaction.state.log.segment.bytes = 104857600 23:17:10 kafka | transactional.id.expiration.ms = 604800000 23:17:10 kafka | unclean.leader.election.enable = false 23:17:10 kafka | unstable.api.versions.enable = false 23:17:10 kafka | zookeeper.clientCnxnSocket = null 23:17:10 kafka | zookeeper.connect = zookeeper:2181 23:17:10 kafka | zookeeper.connection.timeout.ms = null 23:17:10 kafka | zookeeper.max.in.flight.requests = 10 23:17:10 kafka | zookeeper.metadata.migration.enable = false 23:17:10 kafka | zookeeper.session.timeout.ms = 18000 23:17:10 kafka | zookeeper.set.acl = false 23:17:10 kafka | zookeeper.ssl.cipher.suites = null 23:17:10 kafka | zookeeper.ssl.client.enable = false 23:17:10 kafka | zookeeper.ssl.crl.enable = false 23:17:10 kafka | zookeeper.ssl.enabled.protocols = null 23:17:10 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 23:17:10 kafka | zookeeper.ssl.keystore.location = null 23:17:10 kafka | zookeeper.ssl.keystore.password = null 23:17:10 kafka | zookeeper.ssl.keystore.type = null 23:17:10 policy-pap | sasl.mechanism = GSSAPI 23:17:10 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:17:10 policy-pap | sasl.oauthbearer.expected.audience = null 23:17:10 policy-pap | sasl.oauthbearer.expected.issuer = null 23:17:10 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:17:10 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:17:10 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:17:10 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:17:10 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:17:10 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:17:10 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:17:10 policy-pap | security.protocol = PLAINTEXT 23:17:10 policy-pap | security.providers = null 23:17:10 policy-pap | send.buffer.bytes = 131072 23:17:10 policy-pap | session.timeout.ms = 45000 23:17:10 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:17:10 policy-pap | socket.connection.setup.timeout.ms = 10000 23:17:10 policy-pap | ssl.cipher.suites = null 23:17:10 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:17:10 policy-pap | ssl.endpoint.identification.algorithm = https 23:17:10 policy-pap | ssl.engine.factory.class = null 23:17:10 policy-pap | ssl.key.password = null 23:17:10 policy-pap | ssl.keymanager.algorithm = SunX509 23:17:10 policy-pap | ssl.keystore.certificate.chain = null 23:17:10 policy-pap | ssl.keystore.key = null 23:17:10 policy-pap | ssl.keystore.location = null 23:17:10 policy-pap | ssl.keystore.password = null 23:17:10 policy-pap | ssl.keystore.type = JKS 
23:17:10 policy-pap | ssl.protocol = TLSv1.3 23:17:10 policy-pap | ssl.provider = null 23:17:10 policy-pap | ssl.secure.random.implementation = null 23:17:10 policy-pap | ssl.trustmanager.algorithm = PKIX 23:17:10 policy-pap | ssl.truststore.certificates = null 23:17:10 policy-pap | ssl.truststore.location = null 23:17:10 policy-pap | ssl.truststore.password = null 23:17:10 policy-pap | ssl.truststore.type = JKS 23:17:10 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:17:10 policy-pap | 23:17:10 policy-pap | [2024-03-09T23:15:09.086+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:17:10 policy-pap | [2024-03-09T23:15:09.087+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:17:10 policy-pap | [2024-03-09T23:15:09.087+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710026109084 23:17:10 policy-pap | [2024-03-09T23:15:09.090+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-99d6a240-42a3-48b7-904b-df55de280eab-1, groupId=99d6a240-42a3-48b7-904b-df55de280eab] Subscribed to topic(s): policy-pdp-pap 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0450-pdpgroup.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) 23:17:10 policy-db-migrator | -------------- 23:17:10 
policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-apex-pdp | security.protocol = PLAINTEXT 23:17:10 policy-apex-pdp | security.providers = null 23:17:10 policy-apex-pdp | send.buffer.bytes = 131072 23:17:10 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:17:10 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:17:10 policy-apex-pdp | ssl.cipher.suites = null 23:17:10 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:17:10 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:17:10 policy-apex-pdp | ssl.engine.factory.class = null 23:17:10 policy-apex-pdp | ssl.key.password = null 23:17:10 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:17:10 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:17:10 policy-apex-pdp | ssl.keystore.key = null 23:17:10 policy-apex-pdp | ssl.keystore.location = null 23:17:10 policy-apex-pdp | ssl.keystore.password = null 23:17:10 policy-apex-pdp | ssl.keystore.type = JKS 23:17:10 policy-apex-pdp | ssl.protocol = TLSv1.3 23:17:10 policy-apex-pdp | ssl.provider = null 23:17:10 policy-apex-pdp | ssl.secure.random.implementation = null 23:17:10 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:17:10 policy-apex-pdp | ssl.truststore.certificates = null 23:17:10 policy-apex-pdp | ssl.truststore.location = null 23:17:10 policy-apex-pdp | ssl.truststore.password = null 23:17:10 policy-apex-pdp | ssl.truststore.type = JKS 23:17:10 policy-apex-pdp | transaction.timeout.ms = 60000 23:17:10 policy-apex-pdp | transactional.id = null 23:17:10 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:17:10 policy-apex-pdp | 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.229+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.247+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.247+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.247+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710026113247 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.247+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=80407e59-d832-4bab-8846-6f714a30495b, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.247+00:00|INFO|ServiceManager|main] service manager starting set alive 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.247+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.249+00:00|INFO|ServiceManager|main] service manager starting topic sinks 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.250+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.252+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.252+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.252+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.252+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=dbe26ca2-4841-4841-92e1-919ee240973d, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@e077866 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.253+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=dbe26ca2-4841-4841-92e1-919ee240973d, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.253+00:00|INFO|ServiceManager|main] service manager starting Create REST server 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.276+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: 23:17:10 policy-apex-pdp | [] 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.280+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 23:17:10 policy-apex-pdp | 
{"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"66cb7bfe-187d-46c5-9f81-0d27dbc95ad8","timestampMs":1710026113253,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup"} 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.433+00:00|INFO|ServiceManager|main] service manager starting Rest Server 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.433+00:00|INFO|ServiceManager|main] service manager starting 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.433+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.433+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5ebd56e9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@63f34b70{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.445+00:00|INFO|ServiceManager|main] service manager started 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.445+00:00|INFO|ServiceManager|main] service manager started 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.445+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 
23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.446+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5ebd56e9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@63f34b70{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.617+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe26ca2-4841-4841-92e1-919ee240973d-2, groupId=dbe26ca2-4841-4841-92e1-919ee240973d] Cluster ID: GSgJsqhRTlKOoxzH83EoHQ 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.618+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: GSgJsqhRTlKOoxzH83EoHQ 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.619+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe26ca2-4841-4841-92e1-919ee240973d-2, groupId=dbe26ca2-4841-4841-92e1-919ee240973d] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.619+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.625+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe26ca2-4841-4841-92e1-919ee240973d-2, groupId=dbe26ca2-4841-4841-92e1-919ee240973d] (Re-)joining group 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.655+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe26ca2-4841-4841-92e1-919ee240973d-2, groupId=dbe26ca2-4841-4841-92e1-919ee240973d] Request joining group due to: need to re-join with the given member-id: consumer-dbe26ca2-4841-4841-92e1-919ee240973d-2-b749b30d-3583-4260-a790-ab3479b00a8b 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.655+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe26ca2-4841-4841-92e1-919ee240973d-2, groupId=dbe26ca2-4841-4841-92e1-919ee240973d] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 23:17:10 policy-apex-pdp | [2024-03-09T23:15:13.656+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe26ca2-4841-4841-92e1-919ee240973d-2, groupId=dbe26ca2-4841-4841-92e1-919ee240973d] (Re-)joining group 23:17:10 policy-apex-pdp | [2024-03-09T23:15:14.153+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls 23:17:10 policy-apex-pdp | [2024-03-09T23:15:14.155+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls 23:17:10 policy-apex-pdp | [2024-03-09T23:15:16.663+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe26ca2-4841-4841-92e1-919ee240973d-2, groupId=dbe26ca2-4841-4841-92e1-919ee240973d] Successfully joined group with generation Generation{generationId=1, memberId='consumer-dbe26ca2-4841-4841-92e1-919ee240973d-2-b749b30d-3583-4260-a790-ab3479b00a8b', protocol='range'} 23:17:10 policy-apex-pdp | [2024-03-09T23:15:16.675+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe26ca2-4841-4841-92e1-919ee240973d-2, groupId=dbe26ca2-4841-4841-92e1-919ee240973d] Finished assignment for group at generation 1: {consumer-dbe26ca2-4841-4841-92e1-919ee240973d-2-b749b30d-3583-4260-a790-ab3479b00a8b=Assignment(partitions=[policy-pdp-pap-0])} 23:17:10 policy-apex-pdp | [2024-03-09T23:15:16.691+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe26ca2-4841-4841-92e1-919ee240973d-2, groupId=dbe26ca2-4841-4841-92e1-919ee240973d] Successfully synced group in generation Generation{generationId=1, memberId='consumer-dbe26ca2-4841-4841-92e1-919ee240973d-2-b749b30d-3583-4260-a790-ab3479b00a8b', protocol='range'} 23:17:10 policy-apex-pdp | [2024-03-09T23:15:16.692+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe26ca2-4841-4841-92e1-919ee240973d-2, groupId=dbe26ca2-4841-4841-92e1-919ee240973d] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:17:10 policy-apex-pdp | [2024-03-09T23:15:16.695+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe26ca2-4841-4841-92e1-919ee240973d-2, groupId=dbe26ca2-4841-4841-92e1-919ee240973d] Adding newly assigned partitions: policy-pdp-pap-0 23:17:10 policy-apex-pdp | [2024-03-09T23:15:16.702+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe26ca2-4841-4841-92e1-919ee240973d-2, groupId=dbe26ca2-4841-4841-92e1-919ee240973d] Found no committed offset for partition policy-pdp-pap-0 23:17:10 policy-apex-pdp | [2024-03-09T23:15:16.717+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-dbe26ca2-4841-4841-92e1-919ee240973d-2, groupId=dbe26ca2-4841-4841-92e1-919ee240973d] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
23:17:10 policy-apex-pdp | [2024-03-09T23:15:33.253+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 23:17:10 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"51fa2b77-ded6-4b99-a2e0-cf0de6105024","timestampMs":1710026133252,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup"} 23:17:10 policy-apex-pdp | [2024-03-09T23:15:33.279+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:10 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"51fa2b77-ded6-4b99-a2e0-cf0de6105024","timestampMs":1710026133252,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup"} 23:17:10 policy-apex-pdp | [2024-03-09T23:15:33.283+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:17:10 policy-apex-pdp | [2024-03-09T23:15:33.453+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:10 policy-apex-pdp | {"source":"pap-984d5b06-e23e-4e42-abee-2db8b09e6a7c","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"677fe6da-3e09-4ff6-a3fe-8f782868df46","timestampMs":1710026133383,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:10 policy-apex-pdp | [2024-03-09T23:15:33.470+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher 23:17:10 policy-apex-pdp | [2024-03-09T23:15:33.470+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] 23:17:10 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"79c95b38-e2b1-49fc-a4ed-e70662c5a462","timestampMs":1710026133470,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup"} 23:17:10 policy-apex-pdp | [2024-03-09T23:15:33.471+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:17:10 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"677fe6da-3e09-4ff6-a3fe-8f782868df46","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"f21395d0-17b2-48db-bc99-adf2c0e047ae","timestampMs":1710026133471,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:10 policy-apex-pdp | [2024-03-09T23:15:33.493+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:10 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"79c95b38-e2b1-49fc-a4ed-e70662c5a462","timestampMs":1710026133470,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup"} 23:17:10 policy-apex-pdp | [2024-03-09T23:15:33.493+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:17:10 policy-apex-pdp | [2024-03-09T23:15:33.498+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.003974458Z level=info msg="Executing migration" id="create test_data table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.005665874Z level=info 
msg="Migration successfully executed" id="create test_data table" duration=1.691306ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.04690987Z level=info msg="Executing migration" id="create dashboard_version table v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.049538017Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=2.627617ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.092615372Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.094160275Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.546153ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.121975513Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.124020397Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=2.043544ms 23:17:10 kafka | zookeeper.ssl.ocsp.enable = false 23:17:10 kafka | zookeeper.ssl.protocol = TLSv1.2 23:17:10 kafka | zookeeper.ssl.truststore.location = null 23:17:10 kafka | zookeeper.ssl.truststore.password = null 23:17:10 kafka | zookeeper.ssl.truststore.type = null 23:17:10 kafka | (kafka.server.KafkaConfig) 23:17:10 kafka | [2024-03-09 23:14:16,209] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:17:10 kafka | [2024-03-09 23:14:16,210] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:17:10 kafka | [2024-03-09 23:14:16,212] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:17:10 kafka | [2024-03-09 23:14:16,216] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:17:10 kafka | [2024-03-09 23:14:16,253] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:14:16,259] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:14:16,270] INFO Loaded 0 logs in 16ms (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:14:16,272] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:14:16,273] INFO Starting log flusher with a default period of 9223372036854775807 ms. 
(kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:14:16,285] INFO Starting the log cleaner (kafka.log.LogCleaner) 23:17:10 kafka | [2024-03-09 23:14:16,337] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) 23:17:10 kafka | [2024-03-09 23:14:16,378] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 23:17:10 kafka | [2024-03-09 23:14:16,393] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 23:17:10 kafka | [2024-03-09 23:14:16,425] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 23:17:10 kafka | [2024-03-09 23:14:16,823] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 23:17:10 kafka | [2024-03-09 23:14:16,843] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) 23:17:10 kafka | [2024-03-09 23:14:16,843] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 23:17:10 kafka | [2024-03-09 23:14:16,848] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) 23:17:10 kafka | [2024-03-09 23:14:16,853] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 23:17:10 kafka | [2024-03-09 23:14:16,879] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:17:10 kafka | [2024-03-09 23:14:16,881] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:17:10 kafka | [2024-03-09 23:14:16,885] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:17:10 kafka | [2024-03-09 23:14:16,886] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:17:10 kafka | [2024-03-09 23:14:16,889] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:17:10 kafka | [2024-03-09 23:14:16,904] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) 23:17:10 kafka | [2024-03-09 23:14:16,904] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) 23:17:10 kafka | [2024-03-09 23:14:16,931] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) 23:17:10 kafka | [2024-03-09 23:14:16,990] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1710026056956,1710026056956,1,0,0,72057609418178561,258,0,27 23:17:10 kafka | (kafka.zk.KafkaZkClient) 23:17:10 kafka | [2024-03-09 23:14:16,991] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 23:17:10 kafka | [2024-03-09 23:14:17,147] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 23:17:10 kafka | [2024-03-09 23:14:17,154] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:17:10 kafka | [2024-03-09 23:14:17,163] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:17:10 kafka | [2024-03-09 23:14:17,164] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:17:10 kafka | [2024-03-09 23:14:17,180] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:14:17,243] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) 23:17:10 kafka | [2024-03-09 23:14:17,249] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:14:17,263] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) 23:17:10 kafka | [2024-03-09 23:14:17,269] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) 23:17:10 policy-pap | [2024-03-09T23:15:09.091+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:17:10 policy-pap | allow.auto.create.topics = true 23:17:10 policy-pap | auto.commit.interval.ms = 5000 23:17:10 policy-pap | auto.include.jmx.reporter = true 23:17:10 policy-pap | auto.offset.reset = latest 23:17:10 policy-pap | bootstrap.servers = [kafka:9092] 23:17:10 policy-pap | check.crcs = true 23:17:10 policy-pap | client.dns.lookup = use_all_dns_ips 23:17:10 policy-pap | client.id = consumer-policy-pap-2 23:17:10 policy-pap | client.rack = 23:17:10 policy-pap | connections.max.idle.ms = 540000 23:17:10 policy-pap | default.api.timeout.ms = 60000 23:17:10 policy-pap | enable.auto.commit = true 23:17:10 policy-pap | exclude.internal.topics = true 23:17:10 policy-pap | fetch.max.bytes = 52428800 23:17:10 policy-pap | fetch.max.wait.ms = 500 23:17:10 policy-pap | fetch.min.bytes = 1 23:17:10 policy-pap | group.id = policy-pap 23:17:10 policy-pap | group.instance.id = null 23:17:10 policy-pap | heartbeat.interval.ms = 3000 23:17:10 policy-pap | interceptor.classes = [] 23:17:10 policy-pap | internal.leave.group.on.close = true 23:17:10 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:17:10 policy-pap | isolation.level = read_uncommitted 23:17:10 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:17:10 policy-pap | max.partition.fetch.bytes = 1048576 23:17:10 policy-pap | max.poll.interval.ms = 300000 23:17:10 policy-pap | max.poll.records = 500 23:17:10 policy-pap | metadata.max.age.ms = 300000 23:17:10 policy-pap | metric.reporters = [] 23:17:10 policy-pap | metrics.num.samples = 2 23:17:10 policy-pap | 
metrics.recording.level = INFO 23:17:10 policy-pap | metrics.sample.window.ms = 30000 23:17:10 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:17:10 policy-pap | receive.buffer.bytes = 65536 23:17:10 policy-pap | reconnect.backoff.max.ms = 1000 23:17:10 policy-pap | reconnect.backoff.ms = 50 23:17:10 policy-pap | request.timeout.ms = 30000 23:17:10 policy-pap | retry.backoff.ms = 100 23:17:10 policy-pap | sasl.client.callback.handler.class = null 23:17:10 policy-pap | sasl.jaas.config = null 23:17:10 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:17:10 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:17:10 policy-pap | sasl.kerberos.service.name = null 23:17:10 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:17:10 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:17:10 policy-pap | sasl.login.callback.handler.class = null 23:17:10 policy-pap | sasl.login.class = null 23:17:10 policy-pap | sasl.login.connect.timeout.ms = null 23:17:10 policy-pap | sasl.login.read.timeout.ms = null 23:17:10 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0470-pdp.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT 
NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:17:10 policy-pap | sasl.login.refresh.window.factor = 0.8 23:17:10 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:17:10 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:17:10 policy-pap | 
sasl.login.retry.backoff.ms = 100 23:17:10 policy-pap | sasl.mechanism = GSSAPI 23:17:10 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:17:10 policy-pap | sasl.oauthbearer.expected.audience = null 23:17:10 policy-pap | sasl.oauthbearer.expected.issuer = null 23:17:10 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:17:10 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:17:10 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:17:10 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:17:10 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:17:10 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:17:10 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:17:10 policy-pap | security.protocol = PLAINTEXT 23:17:10 policy-pap | security.providers = null 23:17:10 policy-pap | send.buffer.bytes = 131072 23:17:10 policy-pap | session.timeout.ms = 45000 23:17:10 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:17:10 policy-pap | socket.connection.setup.timeout.ms = 10000 23:17:10 policy-pap | ssl.cipher.suites = null 23:17:10 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:17:10 policy-pap | ssl.endpoint.identification.algorithm = https 23:17:10 policy-pap | ssl.engine.factory.class = null 23:17:10 policy-pap | ssl.key.password = null 23:17:10 policy-pap | ssl.keymanager.algorithm = SunX509 23:17:10 policy-pap | ssl.keystore.certificate.chain = null 23:17:10 policy-pap | ssl.keystore.key = null 23:17:10 policy-pap | ssl.keystore.location = null 23:17:10 policy-pap | ssl.keystore.password = null 23:17:10 policy-pap | ssl.keystore.type = JKS 23:17:10 policy-pap | ssl.protocol = TLSv1.3 23:17:10 policy-pap | ssl.provider = null 23:17:10 policy-pap | ssl.secure.random.implementation = null 23:17:10 policy-pap | ssl.trustmanager.algorithm = PKIX 23:17:10 policy-pap | ssl.truststore.certificates = null 23:17:10 policy-pap | ssl.truststore.location = null 23:17:10 policy-pap | ssl.truststore.password = null 23:17:10 policy-pap | ssl.truststore.type = JKS 23:17:10 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:17:10 policy-pap | 23:17:10 policy-pap | [2024-03-09T23:15:09.097+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:17:10 policy-pap | [2024-03-09T23:15:09.097+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:17:10 policy-pap | [2024-03-09T23:15:09.097+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710026109097 23:17:10 policy-pap | [2024-03-09T23:15:09.098+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 23:17:10 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"677fe6da-3e09-4ff6-a3fe-8f782868df46","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"f21395d0-17b2-48db-bc99-adf2c0e047ae","timestampMs":1710026133471,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:10 policy-apex-pdp | [2024-03-09T23:15:33.498+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:17:10 policy-apex-pdp | [2024-03-09T23:15:33.537+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:10 policy-apex-pdp | 
{"source":"pap-984d5b06-e23e-4e42-abee-2db8b09e6a7c","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"b376ce6a-be29-4c6f-9177-d857e98b4d69","timestampMs":1710026133384,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:10 policy-apex-pdp | [2024-03-09T23:15:33.540+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:17:10 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"b376ce6a-be29-4c6f-9177-d857e98b4d69","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"ef94ea69-9cd9-4d2a-942c-2a0e2f2bf9e5","timestampMs":1710026133539,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:10 policy-apex-pdp | [2024-03-09T23:15:33.549+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:10 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"b376ce6a-be29-4c6f-9177-d857e98b4d69","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"ef94ea69-9cd9-4d2a-942c-2a0e2f2bf9e5","timestampMs":1710026133539,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:10 policy-apex-pdp | [2024-03-09T23:15:33.551+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:17:10 policy-apex-pdp | [2024-03-09T23:15:33.654+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:10 policy-apex-pdp | {"source":"pap-984d5b06-e23e-4e42-abee-2db8b09e6a7c","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"9dd920d4-481e-4f3b-96de-86d19b111a74","timestampMs":1710026133629,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:10 policy-apex-pdp | [2024-03-09T23:15:33.656+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:17:10 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"9dd920d4-481e-4f3b-96de-86d19b111a74","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"35213e2b-732e-402c-8250-b38ab5f418f5","timestampMs":1710026133656,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:10 policy-apex-pdp | [2024-03-09T23:15:33.667+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:10 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"9dd920d4-481e-4f3b-96de-86d19b111a74","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"35213e2b-732e-402c-8250-b38ab5f418f5","timestampMs":1710026133656,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:10 policy-apex-pdp | 
[2024-03-09T23:15:33.667+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:17:10 policy-apex-pdp | [2024-03-09T23:15:56.179+00:00|INFO|RequestLog|qtp1068445309-33] 172.17.0.2 - policyadmin [09/Mar/2024:23:15:56 +0000] "GET /metrics HTTP/1.1" 200 10650 "-" "Prometheus/2.50.1" 23:17:10 policy-apex-pdp | [2024-03-09T23:16:56.083+00:00|INFO|RequestLog|qtp1068445309-28] 172.17.0.2 - policyadmin [09/Mar/2024:23:16:56 +0000] "GET /metrics HTTP/1.1" 200 10650 "-" "Prometheus/2.50.1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.156809971Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.157189659Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=378.868µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.16655611Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.167254405Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=697.365µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.228060741Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.228172504Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=127.103µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.244861762Z level=info msg="Executing migration" id="create team table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.246063117Z level=info msg="Migration successfully executed" id="create team table" duration=1.200825ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.337634595Z level=info msg="Executing migration" id="add index team.org_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.339347651Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.713196ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.396831956Z level=info msg="Executing migration" id="add unique index team_org_id_name" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.398530952Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.698626ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.444366227Z level=info msg="Executing migration" id="Add column uid in team" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.451610883Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=7.244036ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.529389623Z level=info msg="Executing migration" id="Update uid column values in team" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.529952705Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=566.672µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.591403795Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.593107701Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.703816ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.675605193Z level=info msg="Executing migration" id="create team member table" 23:17:10 grafana | 
logger=migrator t=2024-03-09T23:14:21.677066145Z level=info msg="Migration successfully executed" id="create team member table" duration=1.458581ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.80069369Z level=info msg="Executing migration" id="add index team_member.org_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.801790164Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.096674ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.860027394Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.861951295Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.924101ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.951635732Z level=info msg="Executing migration" id="add index team_member.team_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:21.953370349Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.748538ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.090019781Z level=info msg="Executing migration" id="Add column email to team table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.097651555Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=7.631654ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.152660915Z level=info msg="Executing migration" id="Add column external to team_member table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.157682663Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=5.025358ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.255302417Z level=info msg="Executing migration" id="Add column permission to team_member table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.263164155Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=7.817368ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.367834191Z level=info msg="Executing migration" id="create dashboard acl table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.369299352Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.469812ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.401156405Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.403035525Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.88344ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.446548049Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.448552022Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=2.003652ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.473617389Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.475321996Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.701357ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.481613561Z level=info msg="Executing 
migration" id="add index dashboard_acl_user_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.482703804Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.091533ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.586201463Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.587532742Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.360929ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.647962498Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.649731116Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.770918ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.663394379Z level=info msg="Executing migration" id="add index dashboard_permission" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.664671716Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.272417ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.724181723Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0570-toscadatatype.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 
CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.725080333Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=898.33µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.761485643Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.761962704Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=476.96µs 23:17:10 policy-pap | [2024-03-09T23:15:09.437+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 23:17:10 policy-pap | [2024-03-09T23:15:09.611+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 23:17:10 policy-pap | [2024-03-09T23:15:09.941+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@71d2261e, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@53917c92, org.springframework.security.web.context.SecurityContextHolderFilter@7c359808, org.springframework.security.web.header.HeaderWriterFilter@52963839, org.springframework.security.web.authentication.logout.LogoutFilter@6787bd41, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@39420d59, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@16361e61, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@1734b1a, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@1fa796a4, org.springframework.security.web.access.ExceptionTranslationFilter@7ce4498f, org.springframework.security.web.access.intercept.AuthorizationFilter@f287a4e] 23:17:10 policy-pap | [2024-03-09T23:15:10.978+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 23:17:10 policy-pap | [2024-03-09T23:15:11.097+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 23:17:10 policy-pap | [2024-03-09T23:15:11.121+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' 23:17:10 policy-pap | [2024-03-09T23:15:11.142+00:00|INFO|ServiceManager|main] Policy PAP starting 23:17:10 policy-pap | [2024-03-09T23:15:11.142+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 23:17:10 policy-pap | [2024-03-09T23:15:11.143+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 23:17:10 policy-pap | [2024-03-09T23:15:11.144+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 23:17:10 policy-pap | [2024-03-09T23:15:11.144+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 23:17:10 policy-pap | [2024-03-09T23:15:11.145+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 23:17:10 policy-pap | [2024-03-09T23:15:11.145+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 23:17:10 policy-pap | [2024-03-09T23:15:11.150+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=99d6a240-42a3-48b7-904b-df55de280eab, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@1cf4d454 23:17:10 policy-pap | [2024-03-09T23:15:11.160+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=99d6a240-42a3-48b7-904b-df55de280eab, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, 
apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:17:10 policy-pap | [2024-03-09T23:15:11.161+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:17:10 policy-pap | allow.auto.create.topics = true 23:17:10 policy-pap | auto.commit.interval.ms = 5000 23:17:10 policy-pap | auto.include.jmx.reporter = true 23:17:10 policy-pap | auto.offset.reset = latest 23:17:10 policy-pap | bootstrap.servers = [kafka:9092] 23:17:10 policy-pap | check.crcs = true 23:17:10 policy-pap | client.dns.lookup = use_all_dns_ips 23:17:10 policy-pap | client.id = consumer-99d6a240-42a3-48b7-904b-df55de280eab-3 23:17:10 policy-pap | client.rack = 23:17:10 policy-pap | connections.max.idle.ms = 540000 23:17:10 policy-pap | default.api.timeout.ms = 60000 23:17:10 policy-pap | enable.auto.commit = true 23:17:10 policy-pap | exclude.internal.topics = true 23:17:10 policy-pap | fetch.max.bytes = 52428800 23:17:10 policy-pap | fetch.max.wait.ms = 500 23:17:10 policy-pap | fetch.min.bytes = 1 23:17:10 policy-pap | group.id = 99d6a240-42a3-48b7-904b-df55de280eab 23:17:10 policy-pap | group.instance.id = null 23:17:10 policy-pap | heartbeat.interval.ms = 3000 23:17:10 policy-pap | interceptor.classes = [] 23:17:10 policy-pap | internal.leave.group.on.close = true 23:17:10 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:17:10 policy-pap | isolation.level = read_uncommitted 23:17:10 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:17:10 policy-pap | max.partition.fetch.bytes = 1048576 23:17:10 policy-pap | max.poll.interval.ms = 300000 23:17:10 policy-pap | max.poll.records = 500 23:17:10 policy-pap | metadata.max.age.ms = 300000 23:17:10 policy-pap | metric.reporters = [] 23:17:10 policy-pap | metrics.num.samples = 2 23:17:10 policy-pap | metrics.recording.level = INFO 23:17:10 policy-pap | metrics.sample.window.ms = 30000 23:17:10 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:17:10 policy-pap | receive.buffer.bytes = 65536 23:17:10 policy-pap | reconnect.backoff.max.ms = 1000 23:17:10 policy-pap | reconnect.backoff.ms = 50 23:17:10 policy-pap | request.timeout.ms = 30000 23:17:10 policy-pap | retry.backoff.ms = 100 23:17:10 policy-pap | sasl.client.callback.handler.class = null 23:17:10 policy-pap | sasl.jaas.config = null 23:17:10 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:17:10 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:17:10 policy-pap | sasl.kerberos.service.name = null 23:17:10 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:17:10 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:17:10 policy-pap | sasl.login.callback.handler.class = null 23:17:10 policy-pap | sasl.login.class = null 23:17:10 policy-pap | sasl.login.connect.timeout.ms = null 23:17:10 policy-pap | sasl.login.read.timeout.ms = null 23:17:10 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:17:10 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:17:10 policy-pap | sasl.login.refresh.window.factor = 0.8 23:17:10 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:17:10 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:17:10 policy-pap | sasl.login.retry.backoff.ms = 100 23:17:10 policy-pap | sasl.mechanism = 
GSSAPI 23:17:10 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:17:10 policy-pap | sasl.oauthbearer.expected.audience = null 23:17:10 policy-pap | sasl.oauthbearer.expected.issuer = null 23:17:10 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.779191703Z level=info msg="Executing migration" id="create tag table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.780684815Z level=info msg="Migration successfully executed" id="create tag table" duration=1.492122ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.803686949Z level=info msg="Executing migration" id="add index tag.key_value" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.805552608Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.864299ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.878976053Z level=info msg="Executing migration" id="create login attempt table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.88021928Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.245447ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.982750049Z level=info msg="Executing migration" id="add index login_attempt.username" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:22.984394305Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.648386ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:23.054746213Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:23.057131534Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=2.385432ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:23.104197642Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:23.120584693Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=16.387381ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:23.144615118Z level=info msg="Executing migration" id="create login_attempt v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:23.145827474Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=1.212436ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:23.229659889Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:23.231362976Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.700077ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:23.271023436Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:23.271506256Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=483.28µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:23.325116745Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:23.326042465Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=923.27µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:23.435858727Z level=info msg="Executing migration" 
id="create user auth table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:23.437191236Z level=info msg="Migration successfully executed" id="create user auth table" duration=1.335199ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:23.545960346Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:23.54755871Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.597354ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:23.604838997Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:23.60496047Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=123.853µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:23.697726567Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:23.705725259Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=7.998651ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:23.735839824Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:23.744759545Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=8.91318ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:23.829592472Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:23.837393469Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=7.798977ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:23.856706963Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:23.864717265Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=8.014162ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:23.9284543Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:23.930109706Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.659776ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:23.963347588Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:23.97044485Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=7.095322ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.023778872Z level=info msg="Executing migration" id="create server_lock table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.040451959Z level=info msg="Migration successfully executed" id="create server_lock table" duration=16.677377ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.070621624Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.072132677Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.511053ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.126592942Z level=info msg="Executing migration" id="create user auth 
token table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.128163006Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.574124ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.160851695Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.163008441Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=2.160316ms 23:17:10 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 kafka | [2024-03-09 23:14:17,273] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) 23:17:10 kafka | [2024-03-09 23:14:17,294] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) 23:17:10 kafka | [2024-03-09 23:14:17,297] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) 23:17:10 kafka | [2024-03-09 23:14:17,298] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) 23:17:10 kafka | [2024-03-09 23:14:17,321] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache) 23:17:10 kafka | [2024-03-09 23:14:17,321] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) 23:17:10 kafka | [2024-03-09 23:14:17,326] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) 23:17:10 kafka | [2024-03-09 23:14:17,330] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) 23:17:10 kafka | [2024-03-09 23:14:17,333] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) 23:17:10 kafka | [2024-03-09 23:14:17,343] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:17:10 kafka | [2024-03-09 23:14:17,371] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) 23:17:10 kafka | [2024-03-09 23:14:17,372] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) 23:17:10 kafka | [2024-03-09 23:14:17,379] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) 23:17:10 kafka | [2024-03-09 23:14:17,385] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. 
(kafka.network.SocketServer) 23:17:10 kafka | [2024-03-09 23:14:17,387] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.182709853Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.184367938Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.661056ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.200576555Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.202384854Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.814759ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.232069479Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.241048451Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=8.977673ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.296921386Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.298710584Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.792488ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.358651747Z level=info msg="Executing migration" id="create cache_data table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.360297112Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.652895ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.444691908Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.4462187Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.526882ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.599051331Z level=info msg="Executing migration" id="create short_url table v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.600697326Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.652666ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.679810198Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.680658916Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=850.198µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.736871859Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.736981632Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=110.703µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.817140216Z level=info msg="Executing migration" id="delete alert_definition table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.81731007Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=175.264µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.864606052Z level=info msg="Executing migration" 
id="recreate alert_definition table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.866265338Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.658256ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.940833293Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:24.942525419Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.696206ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.015296896Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.016921241Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.624035ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.05994468Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.060052193Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=109.223µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.178933154Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.180062688Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.133504ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.209808933Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.211343356Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.538083ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.219580381Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.221147625Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.566844ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.283395285Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.285177293Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.781668ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.366508941Z level=info msg="Executing migration" id="Add column paused in alert_definition" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.374667635Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=8.161434ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.473213381Z level=info msg="Executing migration" id="drop alert_definition table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.474680483Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.467132ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.528183496Z level=info msg="Executing 
migration" id="delete alert_definition_version table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.528320609Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=138.143µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.578753107Z level=info msg="Executing migration" id="recreate alert_definition_version table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.580709588Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.956001ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.643165133Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.646027024Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=2.861111ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.654936794Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.657380997Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=2.443963ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.703026012Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.70339978Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=376.968µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.722351705Z level=info msg="Executing migration" id="drop alert_definition_version table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.7239771Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.629945ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.733420222Z level=info msg="Executing migration" id="create alert_instance table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.735079937Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.665406ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.806297599Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.808134928Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=1.841519ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.821281389Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.82318197Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.90504ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.868990319Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.878597284Z level=info msg="Migration successfully executed" id="add column 
current_state_end to alert_instance" duration=9.606235ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.974227377Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:25.975963665Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=1.742767ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:26.021804864Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:26.02351112Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.711127ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:26.054406019Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:26.085955933Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=31.548993ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:26.140895905Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:26.16975333Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=28.854625ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:26.206377022Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:26.208148569Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=1.766068ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:26.249710497Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:26.25129044Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.575633ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:26.273054705Z level=info msg="Executing migration" id="add current_reason column related to current_state" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:26.281489225Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=8.4355ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:26.30189848Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:26.309742918Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=7.845938ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:26.44341118Z level=info msg="Executing migration" id="create alert_rule table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:26.444934013Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.523083ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:26.488614445Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:26.490166628Z level=info msg="Migration successfully executed" id="add index in alert_rule on 
org_id and title columns" duration=1.552103ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:26.511051194Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:26.51274262Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.691396ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:26.544420666Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:26.546231585Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.811119ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:26.617762721Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:26.617868803Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=107.832µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:26.640075387Z level=info msg="Executing migration" id="add column for to alert_rule" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:26.649612321Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=9.537854ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:26.751193269Z level=info msg="Executing migration" id="add column annotations to alert_rule" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:26.760659831Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=9.468222ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:26.969103999Z level=info msg="Executing migration" id="add column labels to alert_rule" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:26.979361668Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=10.256789ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:27.015981029Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:27.017645435Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=1.664535ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:27.074254671Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:27.076358196Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=2.109165ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:27.106494228Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:27.11598231Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=9.487942ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:27.145693574Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:27.154136714Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=8.4441ms 23:17:10 grafana | logger=migrator 
t=2024-03-09T23:14:27.264814063Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:27.266729494Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.914721ms 23:17:10 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:17:10 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:17:10 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:17:10 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:17:10 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:17:10 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:17:10 policy-pap | security.protocol = PLAINTEXT 23:17:10 policy-pap | security.providers = null 23:17:10 policy-pap | send.buffer.bytes = 131072 23:17:10 policy-pap | session.timeout.ms = 45000 23:17:10 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:17:10 policy-pap | socket.connection.setup.timeout.ms = 10000 23:17:10 policy-pap | ssl.cipher.suites = null 23:17:10 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:17:10 policy-pap | ssl.endpoint.identification.algorithm = https 23:17:10 policy-pap | ssl.engine.factory.class = null 23:17:10 policy-pap | ssl.key.password = null 23:17:10 policy-pap | ssl.keymanager.algorithm = SunX509 23:17:10 policy-pap | ssl.keystore.certificate.chain = null 23:17:10 policy-pap | ssl.keystore.key = null 23:17:10 policy-pap | ssl.keystore.location = null 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:27.328725415Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:27.338318449Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=9.592114ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:27.503908159Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:27.513053304Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=9.139126ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:27.680903641Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:27.681089775Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=191.864µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:27.724182734Z level=info msg="Executing migration" id="create alert_rule_version table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:27.726207527Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=2.025933ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:27.814452178Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:27.816247666Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.796018ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:27.911307902Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 23:17:10 grafana | 
logger=migrator t=2024-03-09T23:14:27.91308561Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.777768ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:27.985487223Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:27.985675757Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=191.884µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:28.16943198Z level=info msg="Executing migration" id="add column for to alert_rule_version" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:28.178729528Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=9.307628ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:28.239631294Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:28.249195168Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=9.567184ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:28.287404681Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:28.29862316Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=11.213309ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:28.385908258Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:28.394673394Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=8.769616ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:28.448291266Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:28.458154156Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=9.86197ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:28.514425634Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:28.514745881Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=323.497µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:28.544488744Z level=info msg="Executing migration" id=create_alert_configuration_table 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:28.546161789Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.675425ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:28.580114922Z level=info msg="Executing migration" id="Add column default in alert_configuration" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:28.588345107Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=8.235035ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:28.622914183Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 23:17:10 grafana | logger=migrator 
t=2024-03-09T23:14:28.623020146Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=107.272µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:28.711215553Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:28.719204453Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=7.989961ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:28.777989204Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:28.779942276Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.960322ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:28.813615963Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:28.82289457Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=9.281937ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:28.891354607Z level=info msg="Executing migration" id=create_ngalert_configuration_table 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:28.892089603Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=728.096µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:28.937079971Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:28.93889376Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.813018ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:28.967427997Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:28.971874362Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=4.446395ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:28.999664663Z level=info msg="Executing migration" id="create provenance_type table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:29.001120734Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=1.460021ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:29.090114548Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:29.091861955Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.748077ms 23:17:10 kafka | [2024-03-09 23:14:17,390] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) 23:17:10 kafka | [2024-03-09 23:14:17,398] INFO Awaiting socket connections on 0.0.0.0:9092. 
(kafka.network.DataPlaneAcceptor) 23:17:10 kafka | [2024-03-09 23:14:17,405] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) 23:17:10 kafka | [2024-03-09 23:14:17,408] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) 23:17:10 kafka | [2024-03-09 23:14:17,408] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) 23:17:10 kafka | [2024-03-09 23:14:17,409] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) 23:17:10 kafka | [2024-03-09 23:14:17,409] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) 23:17:10 kafka | [2024-03-09 23:14:17,409] INFO Kafka version: 7.6.0-ccs (org.apache.kafka.common.utils.AppInfoParser) 23:17:10 kafka | [2024-03-09 23:14:17,409] INFO Kafka commitId: 1991cb733c81d6791626f88253a042b2ec835ab8 (org.apache.kafka.common.utils.AppInfoParser) 23:17:10 kafka | [2024-03-09 23:14:17,409] INFO Kafka startTimeMs: 1710026057403 (org.apache.kafka.common.utils.AppInfoParser) 23:17:10 kafka | [2024-03-09 23:14:17,412] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) 23:17:10 kafka | [2024-03-09 23:14:17,416] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) 23:17:10 kafka | [2024-03-09 23:14:17,416] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) 23:17:10 kafka | [2024-03-09 23:14:17,417] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) 23:17:10 kafka | [2024-03-09 23:14:17,417] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) 23:17:10 kafka | [2024-03-09 23:14:17,418] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) 23:17:10 kafka | [2024-03-09 23:14:17,422] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) 23:17:10 kafka | [2024-03-09 23:14:17,430] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) 23:17:10 kafka | [2024-03-09 23:14:17,431] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) 23:17:10 kafka | [2024-03-09 23:14:17,457] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) 23:17:10 kafka | [2024-03-09 23:14:17,457] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) 23:17:10 kafka | [2024-03-09 23:14:17,458] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) 23:17:10 kafka | [2024-03-09 23:14:17,471] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) 23:17:10 kafka | [2024-03-09 23:14:17,473] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 23:17:10 kafka | [2024-03-09 23:14:17,476] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() 
(kafka.controller.ZkPartitionStateMachine) 23:17:10 kafka | [2024-03-09 23:14:17,477] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) 23:17:10 kafka | [2024-03-09 23:14:17,496] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) 23:17:10 kafka | [2024-03-09 23:14:17,496] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) 23:17:10 kafka | [2024-03-09 23:14:17,496] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) 23:17:10 kafka | [2024-03-09 23:14:17,497] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) 23:17:10 kafka | [2024-03-09 23:14:17,498] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) 23:17:10 kafka | [2024-03-09 23:14:17,521] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) 23:17:10 kafka | [2024-03-09 23:14:17,540] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 23:17:10 kafka | [2024-03-09 23:14:17,550] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:14:17,574] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 23:17:10 kafka | [2024-03-09 23:14:22,526] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 23:17:10 kafka | [2024-03-09 23:14:22,526] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 23:17:10 kafka | [2024-03-09 23:15:11,728] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) 23:17:10 kafka | [2024-03-09 23:15:11,731] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 
45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 23:17:10 kafka | [2024-03-09 23:15:11,733] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 23:17:10 kafka | [2024-03-09 23:15:11,765] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:29.160925872Z level=info msg="Executing migration" id="create alert_image table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:29.16226305Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.337388ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:29.284612252Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:29.286122334Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.510692ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:29.337345723Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:29.337454225Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=110.482µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:29.397323978Z level=info msg="Executing migration" id=create_alert_configuration_history_table 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:29.398748898Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.42519ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:29.493872861Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:29.495452444Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.579643ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:29.529983788Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:29.530895998Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:29.5855484Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:29.586254295Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=706.545µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:29.681776116Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:29.683597604Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.820079ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:29.78314264Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 23:17:10 grafana | logger=migrator 
t=2024-03-09T23:14:29.793870608Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=10.737139ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:29.907435902Z level=info msg="Executing migration" id="create library_element table v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:29.90918339Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.748718ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:29.969950581Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:29.97177329Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.824419ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:30.067605816Z level=info msg="Executing migration" id="create library_element_connection table v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:30.069066247Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.461461ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:30.118938866Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:30.120669853Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.730286ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:30.198822472Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:30.200475797Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.653315ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:30.302027034Z level=info msg="Executing migration" id="increase max description length to 2048" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:30.302085195Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=61.622µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:30.360562937Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:30.36072151Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=161.494µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:30.45866525Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:30.459258102Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=597.822µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:30.502292716Z level=info msg="Executing migration" id="create data_keys table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:30.503959131Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.666605ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:30.524408175Z level=info msg="Executing migration" id="create secrets table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:30.525697933Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.287017ms 23:17:10 grafana | logger=migrator 
t=2024-03-09T23:14:30.609471192Z level=info msg="Executing migration" id="rename data_keys name column to id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:30.643511654Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=34.041303ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:30.71065169Z level=info msg="Executing migration" id="add name column into data_keys" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:30.721351327Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=10.696377ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:30.793782545Z level=info msg="Executing migration" id="copy data_keys id column values into name" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:30.794411209Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=625.763µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:30.845314779Z level=info msg="Executing migration" id="rename data_keys name column to label" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:30.880897965Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=35.586306ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:30.916212905Z level=info msg="Executing migration" id="rename data_keys id column back to name" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:30.961014576Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=44.788832ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:31.072851399Z level=info msg="Executing migration" id="create kv_store table v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:31.08282445Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=9.974441ms 23:17:10 kafka | [2024-03-09 23:15:11,805] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(_NbNQG3qR0OxU2OeMb0-vA),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(wySMvCKtSlSRxF3RSTLuwA),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> 
ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 23:17:10 kafka | [2024-03-09 23:15:11,807] INFO [Controller id=1] New partition creation callback for 
__consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) 23:17:10 kafka | [2024-03-09 23:15:11,809] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,809] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,809] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,810] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,810] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,810] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,810] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,810] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,810] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,810] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,810] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:31.092330572Z level=info msg="Executing migration" id="add 
index kv_store.org_id-namespace-key" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:31.094420486Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=2.089385ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:31.151817622Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:31.152418885Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=603.043µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:31.186240383Z level=info msg="Executing migration" id="create permission table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:31.18799088Z level=info msg="Migration successfully executed" id="create permission table" duration=1.749637ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:31.323349341Z level=info msg="Executing migration" id="add unique index permission.role_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:31.32517955Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.836129ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:31.376226642Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:31.378783506Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=2.556224ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:31.439226008Z level=info msg="Executing migration" id="create role table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:31.440759261Z level=info msg="Migration successfully executed" id="create role table" duration=1.535343ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:31.453202934Z level=info msg="Executing migration" id="add column display_name" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:31.462380099Z level=info msg="Migration successfully executed" id="add column display_name" duration=9.172895ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:31.564190448Z level=info msg="Executing migration" id="add column group_name" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:31.573751121Z level=info msg="Migration successfully executed" id="add column group_name" duration=9.568693ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:31.627686305Z level=info msg="Executing migration" id="add index role.org_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:31.629556824Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.87982ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:31.660753516Z level=info msg="Executing migration" id="add unique index role_org_id_name" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:31.66379701Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=3.040824ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:31.780806162Z level=info msg="Executing migration" id="add index role_org_id_uid" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:31.782935057Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=2.130075ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:31.887200388Z level=info msg="Executing migration" id="create team role table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:31.887960844Z level=info 
msg="Migration successfully executed" id="create team role table" duration=759.736µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:31.932126591Z level=info msg="Executing migration" id="add index team_role.org_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:31.943261897Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=11.137106ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:32.003864232Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:32.005877225Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=2.013143ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:32.145280937Z level=info msg="Executing migration" id="add index team_role.team_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:32.147133747Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.85386ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:32.171464992Z level=info msg="Executing migration" id="create user role table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:32.173125427Z level=info msg="Migration successfully executed" id="create user role table" duration=1.659745ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:32.214010733Z level=info msg="Executing migration" id="add index user_role.org_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:32.215982735Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.973192ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:32.326279191Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:32.327508947Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.225166ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:32.393037285Z level=info msg="Executing migration" id="add index user_role.user_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:32.394910914Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.866349ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:32.474055421Z level=info msg="Executing migration" id="create builtin role table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:32.475770387Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.715206ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:32.565099509Z level=info msg="Executing migration" id="add index builtin_role.role_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:32.566803465Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.703596ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:32.61519982Z level=info msg="Executing migration" id="add index builtin_role.name" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:32.625475528Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=10.311649ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:32.683138609Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:32.696194096Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" 
duration=13.056517ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:32.714439272Z level=info msg="Executing migration" id="add index builtin_role.org_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:32.716184439Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.745647ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:32.754130063Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:32.756161066Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=2.029753ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:32.878830984Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:32.880699563Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.872869ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:33.014525008Z level=info msg="Executing migration" id="add unique index role.uid" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:33.016576951Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=2.053773ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:33.049646521Z level=info msg="Executing migration" id="create seed assignment table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:33.050917558Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=1.274297ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:33.13566174Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:33.136622801Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=961.451µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:33.187019976Z level=info msg="Executing migration" id="add column hidden to role table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:33.198284865Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=11.266459ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:33.311681673Z level=info msg="Executing migration" id="permission kind migration" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:33.325018455Z level=info msg="Migration successfully executed" id="permission kind migration" duration=13.341713ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:33.36736482Z level=info msg="Executing migration" id="permission attribute migration" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:33.377201299Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=9.839838ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:33.544785802Z level=info msg="Executing migration" id="permission identifier migration" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:33.550468123Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=5.679881ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:33.621904985Z level=info msg="Executing migration" id="add permission identifier index" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:33.62311815Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.213425ms 23:17:10 grafana | logger=migrator 
t=2024-03-09T23:14:33.673025586Z level=info msg="Executing migration" id="add permission action scope role_id index" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:33.67511194Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=2.087534ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:33.753057529Z level=info msg="Executing migration" id="remove permission role_id action scope index" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:33.754954429Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.89684ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:33.817107474Z level=info msg="Executing migration" id="create query_history table v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:33.838859854Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=21.75206ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:33.907786762Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:33.910621152Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=2.83426ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:33.926405296Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:33.926700092Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=294.556µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:33.957605706Z level=info msg="Executing migration" id="rbac disabled migrator" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:33.957680007Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=79.592µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:34.023580401Z level=info msg="Executing migration" id="teams permissions migration" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:34.024296076Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=719.275µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:34.062449892Z level=info msg="Executing migration" id="dashboard permissions" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:34.063372341Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=923.909µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:34.123685486Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:34.1248387Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=1.154334ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:34.178225488Z level=info msg="Executing migration" id="drop managed folder create actions" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:34.178618086Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=392.648µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:34.220674245Z level=info msg="Executing migration" id="alerting notification permissions" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:34.221493542Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=818.197µs 
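Not part of the captured console output: the grafana migrator entries above follow a fixed pattern — an "Executing migration" line, then a "Migration successfully executed" line carrying an id and a duration in µs, ms, or s. A minimal, illustrative Java sketch for tallying those entries from a saved copy of this console log follows; the regex is derived from the log lines visible here, and the input file name is only a placeholder.

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Illustrative helper (not part of the CSIT job): counts the grafana
    // "Migration successfully executed" entries in a saved copy of this
    // console log and sums their reported durations. The pattern mirrors
    // the log lines above; "console.log" is a placeholder file name.
    public class GrafanaMigrationSummary {
        private static final Pattern EXECUTED = Pattern.compile(
                "msg=\"Migration successfully executed\" id=\"([^\"]+)\" duration=([0-9.]+)(µs|ms|s)");

        public static void main(String[] args) throws Exception {
            Path log = Path.of(args.length > 0 ? args[0] : "console.log");
            double totalMs = 0;
            int count = 0;
            for (String line : Files.readAllLines(log)) {
                Matcher m = EXECUTED.matcher(line);
                while (m.find()) { // several entries may share one physical line
                    double value = Double.parseDouble(m.group(2));
                    String unit = m.group(3);
                    if (unit.equals("µs"))      totalMs += value / 1000.0; // microseconds
                    else if (unit.equals("ms")) totalMs += value;          // milliseconds
                    else                        totalMs += value * 1000.0; // seconds
                    count++;
                }
            }
            System.out.printf("migrations=%d summedDuration=%.3fms%n", count, totalMs);
        }
    }

Run against the full log, the count should line up with the migrator's own summary entry further down ("migrations completed" with performed=547); the summed per-migration durations will be somewhat below the reported total, which also includes migrator overhead.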
23:17:10 grafana | logger=migrator t=2024-03-09T23:14:34.237816947Z level=info msg="Executing migration" id="create query_history_star table v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:34.239337249Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.519872ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:34.395671792Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:34.397385618Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.715116ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:34.563255602Z level=info msg="Executing migration" id="add column org_id in query_history_star" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:34.573912317Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=10.663555ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:34.606959385Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:34.607216741Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=255.496µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:34.674349319Z level=info msg="Executing migration" id="create correlation table v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:34.677135268Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=2.782209ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:34.835238698Z level=info msg="Executing migration" id="add index correlations.uid" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:34.83722336Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.987412ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:34.880585986Z level=info msg="Executing migration" id="add index correlations.source_uid" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:34.882440536Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.848229ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:34.969665678Z level=info msg="Executing migration" id="add correlation config column" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.011327098Z level=info msg="Migration successfully executed" id="add correlation config column" duration=41.65884ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.049007123Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.051296661Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=2.298108ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.154661382Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.156746576Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=2.088264ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.236934878Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.262401875Z level=info msg="Migration successfully 
executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=25.468017ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.295940273Z level=info msg="Executing migration" id="create correlation v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.29813818Z level=info msg="Migration successfully executed" id="create correlation v2" duration=2.197296ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.302096693Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.303258368Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.165874ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.307352104Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.308720043Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.365909ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.31521506Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.317270083Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=2.054533ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.320801788Z level=info msg="Executing migration" id="copy correlation v1 to v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.321069203Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=267.905µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.326671562Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.327828466Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=1.156795ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.332067505Z level=info msg="Executing migration" id="add provisioning column" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.356010001Z level=info msg="Migration successfully executed" id="add provisioning column" duration=23.940125ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.371763503Z level=info msg="Executing migration" id="create entity_events table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.373214744Z level=info msg="Migration successfully executed" id="create entity_events table" duration=1.45062ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.380415976Z level=info msg="Executing migration" id="create dashboard public config v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.381674092Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.257007ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.388001246Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.388749931Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.393977842Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.39483691Z level=warn 
msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.399174291Z level=info msg="Executing migration" id="Drop old dashboard public config table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.400128931Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=953.79µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.404704928Z level=info msg="Executing migration" id="recreate dashboard public config v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.406608648Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.90341ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.41047415Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.412157205Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.683645ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.507383034Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.509356036Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.972362ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.581171901Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.583364798Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=2.196487ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.595724468Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.597666019Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.943461ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.60099723Z level=info msg="Executing migration" id="Drop public config table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.602057822Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.062602ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.607454036Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.608767424Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.312858ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.6829883Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.684277857Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.290907ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.737791466Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.739705556Z level=info msg="Migration successfully executed" id="create index 
IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.91351ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.81473858Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.816937976Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=2.203067ms 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0630-toscanodetype.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0660-toscaparameter.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0670-toscapolicies.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY 
PK_TOSCAPOLICIES (name, version)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0690-toscapolicy.sql 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.864265655Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.891852727Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=27.592333ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.962959587Z level=info msg="Executing migration" id="add annotations_enabled column" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.974339837Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=11.38553ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.980385115Z level=info msg="Executing migration" id="add time_selection_enabled column" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.989117379Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.731844ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.99724747Z level=info msg="Executing migration" id="delete orphaned public dashboards" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:35.997437894Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=190.034µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:36.090103297Z level=info msg="Executing migration" id="add share column" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:36.100886035Z level=info msg="Migration successfully executed" id="add share column" duration=10.784938ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:36.221263551Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:36.221958786Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=698.235µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:36.252624332Z level=info msg="Executing migration" id="create file table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:36.254751577Z level=info msg="Migration successfully executed" id="create file table" duration=2.126355ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:36.26011899Z level=info msg="Executing migration" id="file table idx: path natural pk" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:36.261962899Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.844189ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:36.310124494Z level=info 
msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:36.312617657Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=2.486852ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:36.348701987Z level=info msg="Executing migration" id="create file_meta table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:36.350110147Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.407619ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:36.366028412Z level=info msg="Executing migration" id="file table idx: path key" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:36.367904012Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.875179ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:36.429609492Z level=info msg="Executing migration" id="set path collation in file table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:36.429829556Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=226.385µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:36.453762501Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:36.453895484Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=134.653µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:36.55387402Z level=info msg="Executing migration" id="managed permissions migration" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:36.555044775Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=1.173055ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:36.593659469Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:36.594233361Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=563.222µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:36.702583084Z level=info msg="Executing migration" id="RBAC action name migrator" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:36.705177379Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=2.599015ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:36.818151039Z level=info msg="Executing migration" id="Add UID column to playlist" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:36.832554223Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=14.409174ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:36.907822348Z level=info msg="Executing migration" id="Update uid column values in playlist" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:36.908111834Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=290.726µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.008338766Z level=info msg="Executing migration" id="Add index for uid in playlist" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.010845279Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=2.508753ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.070561356Z level=info 
msg="Executing migration" id="update group index for alert rules" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.071276941Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=716.625µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.122367047Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.122744294Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=378.368µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.201609354Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.202520303Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=912.009µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.30119763Z level=info msg="Executing migration" id="add action column to seed_assignment" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.313798545Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=12.598475ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.350166751Z level=info msg="Executing migration" id="add scope column to seed_assignment" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.378546588Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=28.380038ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.429150263Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.43136213Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=2.211367ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.458078332Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 23:17:10 policy-pap | ssl.keystore.password = null 23:17:10 policy-pap | ssl.keystore.type = JKS 23:17:10 policy-pap | ssl.protocol = TLSv1.3 23:17:10 policy-pap | ssl.provider = null 23:17:10 policy-pap | ssl.secure.random.implementation = null 23:17:10 policy-pap | ssl.trustmanager.algorithm = PKIX 23:17:10 policy-pap | ssl.truststore.certificates = null 23:17:10 policy-pap | ssl.truststore.location = null 23:17:10 policy-pap | ssl.truststore.password = null 23:17:10 policy-pap | ssl.truststore.type = JKS 23:17:10 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:17:10 policy-pap | 23:17:10 policy-pap | [2024-03-09T23:15:11.168+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:17:10 policy-pap | [2024-03-09T23:15:11.168+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:17:10 policy-pap | [2024-03-09T23:15:11.168+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710026111168 23:17:10 policy-pap | [2024-03-09T23:15:11.169+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-99d6a240-42a3-48b7-904b-df55de280eab-3, groupId=99d6a240-42a3-48b7-904b-df55de280eab] Subscribed to topic(s): policy-pdp-pap 23:17:10 policy-pap | [2024-03-09T23:15:11.169+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 23:17:10 policy-pap | [2024-03-09T23:15:11.169+00:00|INFO|TopicBase|main] 
SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=bb688e9f-d60d-4a93-a626-15e17bdf9d14, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@1f1e15de 23:17:10 policy-pap | [2024-03-09T23:15:11.170+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=bb688e9f-d60d-4a93-a626-15e17bdf9d14, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:17:10 policy-pap | [2024-03-09T23:15:11.170+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:17:10 policy-pap | allow.auto.create.topics = true 23:17:10 policy-pap | auto.commit.interval.ms = 5000 23:17:10 policy-pap | auto.include.jmx.reporter = true 23:17:10 policy-pap | auto.offset.reset = latest 23:17:10 policy-pap | bootstrap.servers = [kafka:9092] 23:17:10 policy-pap | check.crcs = true 23:17:10 policy-pap | client.dns.lookup = use_all_dns_ips 23:17:10 policy-pap | client.id = consumer-policy-pap-4 23:17:10 policy-pap | client.rack = 23:17:10 policy-pap | connections.max.idle.ms = 540000 23:17:10 policy-pap | default.api.timeout.ms = 60000 23:17:10 policy-pap | enable.auto.commit = true 23:17:10 policy-pap | exclude.internal.topics = true 23:17:10 policy-pap | fetch.max.bytes = 52428800 23:17:10 policy-pap | fetch.max.wait.ms = 500 23:17:10 policy-pap | fetch.min.bytes = 1 23:17:10 policy-pap | group.id = policy-pap 23:17:10 policy-pap | group.instance.id = null 23:17:10 policy-pap | heartbeat.interval.ms = 3000 23:17:10 policy-pap | interceptor.classes = [] 23:17:10 policy-pap | internal.leave.group.on.close = true 23:17:10 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:17:10 policy-pap | isolation.level = read_uncommitted 23:17:10 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:17:10 policy-pap | max.partition.fetch.bytes = 1048576 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.53213957Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=74.060788ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.547287499Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.5492321Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.936641ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.575249198Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 23:17:10 grafana | logger=migrator 
t=2024-03-09T23:14:37.577400573Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=2.149915ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.660060603Z level=info msg="Executing migration" id="add primary key to seed_assigment" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.688846859Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=28.787356ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.754799917Z level=info msg="Executing migration" id="add origin column to seed_assignment" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.76824396Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=13.445343ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.806143218Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.806842392Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=699.314µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.840824017Z level=info msg="Executing migration" id="prevent seeding OnCall access" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.841312648Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=488.111µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.874563907Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.875105749Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=545.352µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.903248931Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.903814043Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=564.722µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.965225446Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:37.965799348Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=573.733µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:38.073065764Z level=info msg="Executing migration" id="create folder table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:38.0747808Z level=info msg="Migration successfully executed" id="create folder table" duration=1.715296ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:38.116741032Z level=info msg="Executing migration" id="Add index for parent_uid" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:38.118852066Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=2.110994ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:38.162169736Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:38.164158028Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.988432ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:38.218606003Z level=info 
msg="Executing migration" id="Update folder title length" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:38.218736716Z level=info msg="Migration successfully executed" id="Update folder title length" duration=132.133µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:38.279033233Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:38.280339961Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.306768ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:38.407444022Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:38.409820132Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=2.38158ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:38.417131666Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:38.418497185Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.364739ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:38.466687948Z level=info msg="Executing migration" id="Sync dashboard and folder table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:38.467987245Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=1.295607ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:38.557058707Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:38.557873114Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=828.457µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:38.596529537Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:38.598675102Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=2.146045ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:38.659593362Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:38.661580834Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.991512ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:38.683839712Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:38.685852374Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=2.018142ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:38.78125875Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:38.784182471Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=2.928511ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:38.85119398Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 23:17:10 grafana | logger=migrator 
t=2024-03-09T23:14:38.852578329Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.387309ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:38.965685106Z level=info msg="Executing migration" id="create anon_device table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:38.967919513Z level=info msg="Migration successfully executed" id="create anon_device table" duration=2.239617ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.037662589Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 23:17:10 policy-pap | max.poll.interval.ms = 300000 23:17:10 policy-pap | max.poll.records = 500 23:17:10 policy-pap | metadata.max.age.ms = 300000 23:17:10 policy-pap | metric.reporters = [] 23:17:10 policy-pap | metrics.num.samples = 2 23:17:10 policy-pap | metrics.recording.level = INFO 23:17:10 policy-pap | metrics.sample.window.ms = 30000 23:17:10 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:17:10 policy-pap | receive.buffer.bytes = 65536 23:17:10 policy-pap | reconnect.backoff.max.ms = 1000 23:17:10 policy-pap | reconnect.backoff.ms = 50 23:17:10 policy-pap | request.timeout.ms = 30000 23:17:10 policy-pap | retry.backoff.ms = 100 23:17:10 policy-pap | sasl.client.callback.handler.class = null 23:17:10 policy-pap | sasl.jaas.config = null 23:17:10 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:17:10 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:17:10 policy-pap | sasl.kerberos.service.name = null 23:17:10 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:17:10 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:17:10 policy-pap | sasl.login.callback.handler.class = null 23:17:10 policy-pap | sasl.login.class = null 23:17:10 policy-pap | sasl.login.connect.timeout.ms = null 23:17:10 policy-pap | sasl.login.read.timeout.ms = null 23:17:10 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:17:10 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:17:10 policy-pap | sasl.login.refresh.window.factor = 0.8 23:17:10 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:17:10 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:17:10 policy-pap | sasl.login.retry.backoff.ms = 100 23:17:10 policy-pap | sasl.mechanism = GSSAPI 23:17:10 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:17:10 policy-pap | sasl.oauthbearer.expected.audience = null 23:17:10 policy-pap | sasl.oauthbearer.expected.issuer = null 23:17:10 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:17:10 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:17:10 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:17:10 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:17:10 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:17:10 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:17:10 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:17:10 policy-pap | security.protocol = PLAINTEXT 23:17:10 policy-pap | security.providers = null 23:17:10 policy-pap | send.buffer.bytes = 131072 23:17:10 policy-pap | session.timeout.ms = 45000 23:17:10 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:17:10 policy-pap | socket.connection.setup.timeout.ms = 10000 23:17:10 policy-pap | ssl.cipher.suites = null 23:17:10 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 
23:17:10 policy-pap | ssl.endpoint.identification.algorithm = https 23:17:10 policy-pap | ssl.engine.factory.class = null 23:17:10 policy-pap | ssl.key.password = null 23:17:10 policy-pap | ssl.keymanager.algorithm = SunX509 23:17:10 policy-pap | ssl.keystore.certificate.chain = null 23:17:10 policy-pap | ssl.keystore.key = null 23:17:10 policy-pap | ssl.keystore.location = null 23:17:10 policy-pap | ssl.keystore.password = null 23:17:10 policy-pap | ssl.keystore.type = JKS 23:17:10 policy-pap | ssl.protocol = TLSv1.3 23:17:10 policy-pap | ssl.provider = null 23:17:10 policy-pap | ssl.secure.random.implementation = null 23:17:10 policy-pap | ssl.trustmanager.algorithm = PKIX 23:17:10 policy-pap | ssl.truststore.certificates = null 23:17:10 policy-pap | ssl.truststore.location = null 23:17:10 policy-pap | ssl.truststore.password = null 23:17:10 policy-pap | ssl.truststore.type = JKS 23:17:10 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:17:10 policy-pap | 23:17:10 policy-pap | [2024-03-09T23:15:11.176+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:17:10 policy-pap | [2024-03-09T23:15:11.176+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:17:10 policy-pap | [2024-03-09T23:15:11.176+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710026111176 23:17:10 policy-pap | [2024-03-09T23:15:11.176+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 23:17:10 policy-pap | [2024-03-09T23:15:11.176+00:00|INFO|ServiceManager|main] Policy PAP starting topics 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.040732833Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=3.075105ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.133305047Z level=info msg="Executing migration" id="add index anon_device.updated_at" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.135811159Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=2.508633ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.153585912Z level=info msg="Executing migration" id="create signing_key table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.155194226Z level=info msg="Migration successfully executed" id="create signing_key table" duration=1.608544ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.24446528Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.246006452Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.548622ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.264166654Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.26541546Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.248776ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.314599053Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.315263456Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=662.334µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.340361213Z 
level=info msg="Executing migration" id="Add folder_uid for dashboard" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.351191251Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=10.831718ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.398292229Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.399688959Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=1.40011ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.464769995Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.465843838Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.070782ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.50884906Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.51075383Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.90476ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.585193733Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.587228516Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=2.039533ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.622715841Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.625095671Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=2.3774ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.633326434Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.634653092Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.325708ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.704000788Z level=info msg="Executing migration" id="create sso_setting table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.705939048Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.938531ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.750489893Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.752014755Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.525192ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.758658945Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.758938221Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=280.806µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.764529378Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 23:17:10 grafana | logger=migrator 
t=2024-03-09T23:14:39.764599639Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=69.961µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.771636067Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.782282811Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=10.647354ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.865866156Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.87368209Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=7.817834ms 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.964078517Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:39.964782582Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=711.075µs 23:17:10 grafana | logger=migrator t=2024-03-09T23:14:40.009892199Z level=info msg="migrations completed" performed=547 skipped=0 duration=33.137571014s 23:17:10 grafana | logger=sqlstore t=2024-03-09T23:14:40.024113277Z level=info msg="Created default admin" user=admin 23:17:10 grafana | logger=sqlstore t=2024-03-09T23:14:40.024310861Z level=info msg="Created default organization" 23:17:10 grafana | logger=secrets t=2024-03-09T23:14:40.091963699Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 23:17:10 grafana | logger=plugin.store t=2024-03-09T23:14:40.112760966Z level=info msg="Loading plugins..." 
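
The ConsumerConfig dump a few entries above ends with value.deserializer = StringDeserializer, and the following KafkaConsumer entry shows the policy-pap consumer (clientId consumer-policy-pap-4, groupId policy-pap) subscribing to policy-pdp-pap on kafka:9092. Below is a minimal sketch of the equivalent wiring with the standard Apache Kafka Java client; the property values mirror what the log prints, while the class name and the simple poll loop are illustrative only, not PAP's actual SingleThreadedBusTopicSource implementation.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PdpPapConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Values mirror the ConsumerConfig dump in the log above.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // The log shows this consumer subscribing to the policy-pdp-pap topic.
            consumer.subscribe(Collections.singletonList("policy-pdp-pap"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}

In the real component the polling happens on the KAFKA-source threads named further down in the log (e.g. KAFKA-source-policy-pdp-pap); the sketch only shows the client-side configuration that those threads end up using.
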
23:17:10 grafana | logger=local.finder t=2024-03-09T23:14:40.163341886Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 23:17:10 grafana | logger=plugin.store t=2024-03-09T23:14:40.163376127Z level=info msg="Plugins loaded" count=55 duration=50.615811ms 23:17:10 grafana | logger=query_data t=2024-03-09T23:14:40.166163115Z level=info msg="Query Service initialization" 23:17:10 grafana | logger=live.push_http t=2024-03-09T23:14:40.169703399Z level=info msg="Live Push Gateway initialization" 23:17:10 grafana | logger=ngalert.migration t=2024-03-09T23:14:40.243567268Z level=info msg=Starting 23:17:10 grafana | logger=ngalert.migration t=2024-03-09T23:14:40.244491067Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false 23:17:10 grafana | logger=ngalert.migration orgID=1 t=2024-03-09T23:14:40.245313785Z level=info msg="Migrating alerts for organisation" 23:17:10 grafana | logger=ngalert.migration orgID=1 t=2024-03-09T23:14:40.24651956Z level=info msg="Alerts found to migrate" alerts=0 23:17:10 grafana | logger=ngalert.migration t=2024-03-09T23:14:40.249123145Z level=info msg="Completed alerting migration" 23:17:10 policy-pap | [2024-03-09T23:15:11.177+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=bb688e9f-d60d-4a93-a626-15e17bdf9d14, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:17:10 policy-pap | [2024-03-09T23:15:11.177+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=99d6a240-42a3-48b7-904b-df55de280eab, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:17:10 policy-pap | [2024-03-09T23:15:11.193+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=d3c89308-80de-4f00-a546-0ff7392747d6, alive=false, publisher=null]]: starting 23:17:10 policy-pap | [2024-03-09T23:15:11.218+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:17:10 policy-pap | acks = -1 23:17:10 policy-pap | auto.include.jmx.reporter = true 23:17:10 policy-pap | batch.size = 16384 23:17:10 policy-pap | bootstrap.servers = [kafka:9092] 23:17:10 policy-pap | buffer.memory = 33554432 23:17:10 policy-pap | client.dns.lookup = use_all_dns_ips 23:17:10 policy-pap | client.id = producer-1 23:17:10 policy-pap | compression.type = none 23:17:10 policy-pap | connections.max.idle.ms = 540000 23:17:10 policy-pap | delivery.timeout.ms = 
120000 23:17:10 policy-pap | enable.idempotence = true 23:17:10 policy-pap | interceptor.classes = [] 23:17:10 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:17:10 policy-pap | linger.ms = 0 23:17:10 policy-pap | max.block.ms = 60000 23:17:10 policy-pap | max.in.flight.requests.per.connection = 5 23:17:10 policy-pap | max.request.size = 1048576 23:17:10 policy-pap | metadata.max.age.ms = 300000 23:17:10 policy-pap | metadata.max.idle.ms = 300000 23:17:10 policy-pap | metric.reporters = [] 23:17:10 policy-pap | metrics.num.samples = 2 23:17:10 policy-pap | metrics.recording.level = INFO 23:17:10 policy-pap | metrics.sample.window.ms = 30000 23:17:10 policy-pap | partitioner.adaptive.partitioning.enable = true 23:17:10 policy-pap | partitioner.availability.timeout.ms = 0 23:17:10 policy-pap | partitioner.class = null 23:17:10 policy-pap | partitioner.ignore.keys = false 23:17:10 policy-pap | receive.buffer.bytes = 32768 23:17:10 policy-pap | reconnect.backoff.max.ms = 1000 23:17:10 policy-pap | reconnect.backoff.ms = 50 23:17:10 policy-pap | request.timeout.ms = 30000 23:17:10 policy-pap | retries = 2147483647 23:17:10 policy-pap | retry.backoff.ms = 100 23:17:10 policy-pap | sasl.client.callback.handler.class = null 23:17:10 policy-pap | sasl.jaas.config = null 23:17:10 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:17:10 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:17:10 policy-pap | sasl.kerberos.service.name = null 23:17:10 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:17:10 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:17:10 policy-pap | sasl.login.callback.handler.class = null 23:17:10 policy-pap | sasl.login.class = null 23:17:10 policy-pap | sasl.login.connect.timeout.ms = null 23:17:10 policy-pap | sasl.login.read.timeout.ms = null 23:17:10 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:17:10 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:17:10 policy-pap | sasl.login.refresh.window.factor = 0.8 23:17:10 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:17:10 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:17:10 policy-pap | sasl.login.retry.backoff.ms = 100 23:17:10 policy-pap | sasl.mechanism = GSSAPI 23:17:10 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:17:10 policy-pap | sasl.oauthbearer.expected.audience = null 23:17:10 policy-pap | sasl.oauthbearer.expected.issuer = null 23:17:10 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:17:10 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:17:10 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:17:10 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:17:10 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:17:10 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:17:10 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:17:10 policy-pap | security.protocol = PLAINTEXT 23:17:10 policy-pap | security.providers = null 23:17:10 policy-pap | send.buffer.bytes = 131072 23:17:10 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:17:10 policy-pap | socket.connection.setup.timeout.ms = 10000 23:17:10 policy-pap | ssl.cipher.suites = null 23:17:10 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:17:10 policy-pap | ssl.endpoint.identification.algorithm = https 23:17:10 policy-pap | ssl.engine.factory.class = null 23:17:10 policy-pap | ssl.key.password = null 23:17:10 grafana | 
logger=ngalert.state.manager t=2024-03-09T23:14:40.373977022Z level=info msg="Running in alternative execution of Error/NoData mode" 23:17:10 grafana | logger=infra.usagestats.collector t=2024-03-09T23:14:40.377765692Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 23:17:10 grafana | logger=provisioning.datasources t=2024-03-09T23:14:40.381308826Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 23:17:10 grafana | logger=provisioning.alerting t=2024-03-09T23:14:40.418531657Z level=info msg="starting to provision alerting" 23:17:10 grafana | logger=provisioning.alerting t=2024-03-09T23:14:40.418554847Z level=info msg="finished to provision alerting" 23:17:10 grafana | logger=grafanaStorageLogger t=2024-03-09T23:14:40.419474466Z level=info msg="Storage starting" 23:17:10 grafana | logger=ngalert.state.manager t=2024-03-09T23:14:40.421586971Z level=info msg="Warming state cache for startup" 23:17:10 grafana | logger=ngalert.multiorg.alertmanager t=2024-03-09T23:14:40.421908177Z level=info msg="Starting MultiOrg Alertmanager" 23:17:10 grafana | logger=http.server t=2024-03-09T23:14:40.425228827Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 23:17:10 grafana | logger=sqlstore.transactions t=2024-03-09T23:14:40.440496067Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 23:17:10 grafana | logger=plugins.update.checker t=2024-03-09T23:14:40.553975095Z level=info msg="Update check succeeded" duration=135.186223ms 23:17:10 grafana | logger=grafana.update.checker t=2024-03-09T23:14:40.605811682Z level=info msg="Update check succeeded" duration=187.069301ms 23:17:10 grafana | logger=provisioning.dashboard t=2024-03-09T23:14:40.611606984Z level=info msg="starting to provision dashboards" 23:17:10 grafana | logger=grafana-apiserver t=2024-03-09T23:14:40.678638089Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 23:17:10 grafana | logger=grafana-apiserver t=2024-03-09T23:14:40.679179821Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 23:17:10 grafana | logger=ngalert.state.manager t=2024-03-09T23:14:41.120303337Z level=info msg="State cache has been initialized" states=0 duration=698.711296ms 23:17:10 grafana | logger=ngalert.scheduler t=2024-03-09T23:14:41.120352848Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 23:17:10 grafana | logger=ticker t=2024-03-09T23:14:41.120407419Z level=info msg=starting first_tick=2024-03-09T23:14:50Z 23:17:10 grafana | logger=provisioning.dashboard t=2024-03-09T23:14:41.306304952Z level=info msg="finished to provision dashboards" 23:17:10 grafana | logger=infra.usagestats t=2024-03-09T23:15:52.430171829Z level=info msg="Usage stats are ready to report" 23:17:10 kafka | [2024-03-09 23:15:11,810] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,810] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,810] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 
23:15:11,810] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,810] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,810] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,810] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,810] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,810] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,810] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,810] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,810] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,810] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,810] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,810] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,810] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 
23:15:11,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 
23:15:11,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,811] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,811] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | 
[2024-03-09 23:15:11,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,817] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 
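
The controller entries in this stretch walk each partition of __consumer_offsets and policy-pdp-pap through Kafka's state machines (NonExistentPartition to NewPartition, and NonExistentReplica to NewReplica, with the OnlinePartition transitions appearing shortly after). One way to confirm from the client side that these topics actually came online is to list them with the Kafka Admin API. The sketch below assumes the standard Java AdminClient and the kafka:9092 bootstrap address seen elsewhere in this log; the class name is hypothetical.

import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class TopicCheckSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Lists topic names; policy-pdp-pap should appear once the controller
            // has moved its partition to OnlinePartition.
            Set<String> topics = admin.listTopics().names().get();
            System.out.println("policy-pdp-pap present: " + topics.contains("policy-pdp-pap"));
        }
    }
}
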
kafka | [2024-03-09 23:15:11,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 policy-pap | ssl.keymanager.algorithm = SunX509 23:17:10 policy-pap | ssl.keystore.certificate.chain = null 23:17:10 policy-pap | ssl.keystore.key = null 23:17:10 policy-pap | ssl.keystore.location = null 23:17:10 policy-pap | ssl.keystore.password = null 23:17:10 policy-pap | ssl.keystore.type = JKS 23:17:10 policy-pap | ssl.protocol = TLSv1.3 23:17:10 policy-pap | ssl.provider = null 23:17:10 policy-pap | ssl.secure.random.implementation = null 23:17:10 policy-pap | ssl.trustmanager.algorithm = PKIX 23:17:10 policy-pap | ssl.truststore.certificates = null 23:17:10 policy-pap | ssl.truststore.location = null 23:17:10 policy-pap | ssl.truststore.password = null 23:17:10 policy-pap | ssl.truststore.type = JKS 23:17:10 policy-pap | transaction.timeout.ms = 60000 23:17:10 policy-pap | transactional.id = null 23:17:10 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:17:10 policy-pap | 23:17:10 policy-pap | [2024-03-09T23:15:11.233+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
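
The ProducerConfig dump above (acks = -1, enable.idempotence = true, StringSerializer for key and value, bootstrap kafka:9092, client.id producer-1) corresponds to the "Instantiated an idempotent producer" message that closes it. A minimal sketch of building such a producer with the plain Kafka Java client follows; the property values are taken from the log, whereas the class name and the sample PDP_STATUS-style payload are illustrative assumptions rather than PAP's actual publisher code.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class PdpPapProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Values taken from the ProducerConfig dump above; everything else stays at client defaults.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.CLIENT_ID_CONFIG, "producer-1");
        props.put(ProducerConfig.ACKS_CONFIG, "all");               // acks = -1 in the log is the same as "all"
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);  // "Instantiated an idempotent producer"
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Hypothetical heartbeat-style message on the topic seen in the log.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_STATUS\"}"));
            producer.flush();
        }
    }
}

With idempotence enabled the client requires acks=all, retries greater than zero and at most five in-flight requests per connection, which is consistent with the retries = 2147483647 and max.in.flight.requests.per.connection = 5 values in the dump.
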
23:17:10 policy-pap | [2024-03-09T23:15:11.252+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:17:10 policy-pap | [2024-03-09T23:15:11.252+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:17:10 policy-pap | [2024-03-09T23:15:11.252+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710026111252 23:17:10 policy-pap | [2024-03-09T23:15:11.252+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=d3c89308-80de-4f00-a546-0ff7392747d6, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:17:10 policy-pap | [2024-03-09T23:15:11.252+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=4876646e-1d8e-444e-9de6-a52f477d6518, alive=false, publisher=null]]: starting 23:17:10 policy-pap | [2024-03-09T23:15:11.253+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:17:10 policy-pap | acks = -1 23:17:10 policy-pap | auto.include.jmx.reporter = true 23:17:10 policy-pap | batch.size = 16384 23:17:10 policy-pap | bootstrap.servers = [kafka:9092] 23:17:10 policy-pap | buffer.memory = 33554432 23:17:10 policy-pap | client.dns.lookup = use_all_dns_ips 23:17:10 policy-pap | client.id = producer-2 23:17:10 policy-pap | compression.type = none 23:17:10 policy-pap | connections.max.idle.ms = 540000 23:17:10 policy-pap | delivery.timeout.ms = 120000 23:17:10 policy-pap | enable.idempotence = true 23:17:10 policy-pap | interceptor.classes = [] 23:17:10 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:17:10 policy-pap | linger.ms = 0 23:17:10 policy-pap | max.block.ms = 60000 23:17:10 policy-pap | max.in.flight.requests.per.connection = 5 23:17:10 policy-pap | max.request.size = 1048576 23:17:10 policy-pap | metadata.max.age.ms = 300000 23:17:10 policy-pap | metadata.max.idle.ms = 300000 23:17:10 policy-pap | metric.reporters = [] 23:17:10 policy-pap | metrics.num.samples = 2 23:17:10 kafka | [2024-03-09 23:15:11,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,818] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,819] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,957] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,957] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,957] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,957] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,958] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,959] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,959] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,959] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,959] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,959] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,959] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,959] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,959] INFO 
[Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,959] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,960] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,960] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,960] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,960] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,960] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,960] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,960] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,960] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,960] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,961] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,961] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,961] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,961] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,961] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,961] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,961] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,961] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,961] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,962] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,962] INFO 
[Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,962] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 policy-pap | metrics.recording.level = INFO 23:17:10 policy-pap | metrics.sample.window.ms = 30000 23:17:10 policy-pap | partitioner.adaptive.partitioning.enable = true 23:17:10 policy-pap | partitioner.availability.timeout.ms = 0 23:17:10 policy-pap | partitioner.class = null 23:17:10 policy-pap | partitioner.ignore.keys = false 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0730-toscaproperty.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName 
VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0770-toscarequirement.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0780-toscarequirements.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT 
(conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-pap | receive.buffer.bytes = 32768 23:17:10 policy-pap | reconnect.backoff.max.ms = 1000 23:17:10 policy-pap | reconnect.backoff.ms = 50 23:17:10 policy-pap | request.timeout.ms = 30000 23:17:10 policy-pap | retries = 2147483647 23:17:10 policy-pap | retry.backoff.ms = 100 23:17:10 policy-pap | sasl.client.callback.handler.class = null 23:17:10 policy-pap | sasl.jaas.config = null 23:17:10 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:17:10 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:17:10 policy-pap | sasl.kerberos.service.name = null 23:17:10 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:17:10 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:17:10 policy-pap | sasl.login.callback.handler.class = null 23:17:10 policy-pap | sasl.login.class = null 23:17:10 policy-pap | sasl.login.connect.timeout.ms = null 23:17:10 policy-pap | sasl.login.read.timeout.ms = null 23:17:10 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:17:10 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:17:10 policy-pap | sasl.login.refresh.window.factor = 0.8 23:17:10 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:17:10 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:17:10 policy-pap | sasl.login.retry.backoff.ms = 100 23:17:10 policy-pap | sasl.mechanism = GSSAPI 23:17:10 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:17:10 policy-pap | sasl.oauthbearer.expected.audience = null 23:17:10 policy-pap | sasl.oauthbearer.expected.issuer = null 23:17:10 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:17:10 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:17:10 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:17:10 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:17:10 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:17:10 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:17:10 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:17:10 policy-pap | security.protocol = PLAINTEXT 23:17:10 policy-pap | security.providers = null 23:17:10 policy-pap | send.buffer.bytes = 131072 23:17:10 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:17:10 policy-pap | socket.connection.setup.timeout.ms = 10000 23:17:10 policy-pap | ssl.cipher.suites = null 23:17:10 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:17:10 policy-pap | ssl.endpoint.identification.algorithm = https 23:17:10 policy-pap | ssl.engine.factory.class = null 23:17:10 policy-pap | ssl.key.password = null 23:17:10 policy-pap | ssl.keymanager.algorithm = SunX509 23:17:10 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, 
relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0820-toscatrigger.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > 
upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-pap | ssl.keystore.certificate.chain = null 23:17:10 policy-pap | ssl.keystore.key = null 23:17:10 policy-pap | ssl.keystore.location = null 23:17:10 policy-pap | ssl.keystore.password = null 23:17:10 policy-pap | ssl.keystore.type = JKS 23:17:10 policy-pap | ssl.protocol = TLSv1.3 23:17:10 policy-pap | ssl.provider = null 23:17:10 policy-pap | ssl.secure.random.implementation = null 23:17:10 policy-pap | ssl.trustmanager.algorithm = PKIX 23:17:10 policy-pap | ssl.truststore.certificates = null 23:17:10 policy-pap | ssl.truststore.location = null 23:17:10 policy-pap | ssl.truststore.password = null 23:17:10 policy-pap | ssl.truststore.type = JKS 23:17:10 policy-pap | transaction.timeout.ms = 60000 23:17:10 policy-pap | transactional.id = null 23:17:10 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:17:10 policy-pap | 23:17:10 policy-pap | [2024-03-09T23:15:11.254+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 
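[editor's note] The producer settings dumped above (value.serializer = org.apache.kafka.common.serialization.StringSerializer, retries = 2147483647, security.protocol = PLAINTEXT, followed by "Instantiated an idempotent producer") are the standard Kafka Java producer configuration keys. The snippet below is a minimal, hypothetical sketch of a producer built with those visible values; it is not the PAP source code, and the bootstrap address kafka:9092 and the key serializer are assumptions (the broker address only appears later in this log as the group coordinator).

    import java.util.Properties;
    import org.apache.kafka.clients.CommonClientConfigs;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PdpPapProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Assumption: broker address; the log below shows the coordinator at kafka:9092.
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            // Values mirrored from the config dump printed above.
            props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "PLAINTEXT");
            props.put(ProducerConfig.RETRIES_CONFIG, Integer.toString(Integer.MAX_VALUE)); // retries = 2147483647
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true"); // "Instantiated an idempotent producer"
            // Assumption: key serializer (only the value serializer is visible in this part of the dump).
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Topic name taken from the PAP/PDP exchange shown further down in the log;
                // the payload here is a placeholder, not an actual PDP_UPDATE message.
                producer.send(new ProducerRecord<>("policy-pdp-pap", "{\"messageName\":\"PDP_UPDATE\"}"));
                producer.flush();
            }
        }
    }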
23:17:10 policy-pap | [2024-03-09T23:15:11.256+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:17:10 policy-pap | [2024-03-09T23:15:11.256+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:17:10 policy-pap | [2024-03-09T23:15:11.256+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710026111256 23:17:10 policy-pap | [2024-03-09T23:15:11.257+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=4876646e-1d8e-444e-9de6-a52f477d6518, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:17:10 policy-pap | [2024-03-09T23:15:11.257+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 23:17:10 policy-pap | [2024-03-09T23:15:11.257+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 23:17:10 policy-pap | [2024-03-09T23:15:11.259+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 23:17:10 policy-pap | [2024-03-09T23:15:11.260+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 23:17:10 policy-pap | [2024-03-09T23:15:11.264+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 23:17:10 policy-pap | [2024-03-09T23:15:11.266+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 23:17:10 policy-pap | [2024-03-09T23:15:11.266+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 23:17:10 policy-pap | [2024-03-09T23:15:11.264+00:00|INFO|TimerManager|Thread-9] timer manager update started 23:17:10 policy-pap | [2024-03-09T23:15:11.267+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 23:17:10 policy-pap | [2024-03-09T23:15:11.268+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 23:17:10 policy-pap | [2024-03-09T23:15:11.269+00:00|INFO|ServiceManager|main] Policy PAP started 23:17:10 policy-pap | [2024-03-09T23:15:11.271+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 11.449 seconds (process running for 12.129) 23:17:10 policy-pap | [2024-03-09T23:15:11.718+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: GSgJsqhRTlKOoxzH83EoHQ 23:17:10 policy-pap | [2024-03-09T23:15:11.720+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 23:17:10 policy-pap | [2024-03-09T23:15:11.720+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: GSgJsqhRTlKOoxzH83EoHQ 23:17:10 policy-pap | [2024-03-09T23:15:11.721+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: GSgJsqhRTlKOoxzH83EoHQ 23:17:10 policy-pap | [2024-03-09T23:15:11.801+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-99d6a240-42a3-48b7-904b-df55de280eab-3, groupId=99d6a240-42a3-48b7-904b-df55de280eab] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:17:10 policy-pap | [2024-03-09T23:15:11.801+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-99d6a240-42a3-48b7-904b-df55de280eab-3, groupId=99d6a240-42a3-48b7-904b-df55de280eab] Cluster ID: GSgJsqhRTlKOoxzH83EoHQ 23:17:10 policy-pap | [2024-03-09T23:15:11.826+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer 
clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:17:10 policy-pap | [2024-03-09T23:15:11.845+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0 23:17:10 policy-pap | [2024-03-09T23:15:11.846+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 23:17:10 policy-pap | [2024-03-09T23:15:11.929+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-99d6a240-42a3-48b7-904b-df55de280eab-3, groupId=99d6a240-42a3-48b7-904b-df55de280eab] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:17:10 policy-pap | [2024-03-09T23:15:11.943+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:17:10 policy-pap | [2024-03-09T23:15:12.038+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-99d6a240-42a3-48b7-904b-df55de280eab-3, groupId=99d6a240-42a3-48b7-904b-df55de280eab] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:17:10 policy-pap | [2024-03-09T23:15:12.049+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:17:10 kafka | [2024-03-09 23:15:11,962] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,962] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,962] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,962] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,962] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,962] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,963] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,963] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,963] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,965] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,965] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,965] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,966] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-16 (state.change.logger) 23:17:10 policy-pap | [2024-03-09T23:15:12.152+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-99d6a240-42a3-48b7-904b-df55de280eab-3, groupId=99d6a240-42a3-48b7-904b-df55de280eab] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:17:10 policy-pap | [2024-03-09T23:15:12.160+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] 
[Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:17:10 policy-pap | [2024-03-09T23:15:12.259+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-99d6a240-42a3-48b7-904b-df55de280eab-3, groupId=99d6a240-42a3-48b7-904b-df55de280eab] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:17:10 policy-pap | [2024-03-09T23:15:12.270+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:17:10 policy-pap | [2024-03-09T23:15:12.372+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-99d6a240-42a3-48b7-904b-df55de280eab-3, groupId=99d6a240-42a3-48b7-904b-df55de280eab] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:17:10 policy-pap | [2024-03-09T23:15:12.376+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:17:10 policy-pap | [2024-03-09T23:15:12.481+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 23:17:10 policy-pap | [2024-03-09T23:15:12.484+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-99d6a240-42a3-48b7-904b-df55de280eab-3, groupId=99d6a240-42a3-48b7-904b-df55de280eab] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:17:10 policy-pap | [2024-03-09T23:15:12.585+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-99d6a240-42a3-48b7-904b-df55de280eab-3, groupId=99d6a240-42a3-48b7-904b-df55de280eab] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 23:17:10 policy-pap | [2024-03-09T23:15:12.590+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:17:10 policy-pap | [2024-03-09T23:15:12.699+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-99d6a240-42a3-48b7-904b-df55de280eab-3, groupId=99d6a240-42a3-48b7-904b-df55de280eab] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:17:10 policy-pap | [2024-03-09T23:15:12.703+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:17:10 policy-pap | [2024-03-09T23:15:12.712+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-99d6a240-42a3-48b7-904b-df55de280eab-3, groupId=99d6a240-42a3-48b7-904b-df55de280eab] (Re-)joining group 23:17:10 policy-pap | [2024-03-09T23:15:12.713+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 23:17:10 policy-pap | [2024-03-09T23:15:12.769+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-99d6a240-42a3-48b7-904b-df55de280eab-3, groupId=99d6a240-42a3-48b7-904b-df55de280eab] Request joining group due to: need to re-join with the given member-id: consumer-99d6a240-42a3-48b7-904b-df55de280eab-3-a4e50710-7734-4868-9a53-8dcca5787c23 23:17:10 policy-pap | [2024-03-09T23:15:12.769+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-99d6a240-42a3-48b7-904b-df55de280eab-3, groupId=99d6a240-42a3-48b7-904b-df55de280eab] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 23:17:10 policy-pap | [2024-03-09T23:15:12.769+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-99d6a240-42a3-48b7-904b-df55de280eab-3, groupId=99d6a240-42a3-48b7-904b-df55de280eab] (Re-)joining group 23:17:10 policy-pap | [2024-03-09T23:15:12.776+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-e0fe26bd-6bcb-4009-9bd7-a5712ddcf51d 23:17:10 policy-pap | [2024-03-09T23:15:12.776+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 23:17:10 policy-pap | [2024-03-09T23:15:12.776+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 23:17:10 policy-pap | [2024-03-09T23:15:15.804+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-99d6a240-42a3-48b7-904b-df55de280eab-3, groupId=99d6a240-42a3-48b7-904b-df55de280eab] Successfully joined group with generation Generation{generationId=1, memberId='consumer-99d6a240-42a3-48b7-904b-df55de280eab-3-a4e50710-7734-4868-9a53-8dcca5787c23', protocol='range'} 23:17:10 policy-pap | [2024-03-09T23:15:15.807+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-e0fe26bd-6bcb-4009-9bd7-a5712ddcf51d', protocol='range'} 23:17:10 policy-pap | [2024-03-09T23:15:15.816+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-99d6a240-42a3-48b7-904b-df55de280eab-3, groupId=99d6a240-42a3-48b7-904b-df55de280eab] Finished assignment for group at generation 1: {consumer-99d6a240-42a3-48b7-904b-df55de280eab-3-a4e50710-7734-4868-9a53-8dcca5787c23=Assignment(partitions=[policy-pdp-pap-0])} 23:17:10 policy-pap | [2024-03-09T23:15:15.815+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-e0fe26bd-6bcb-4009-9bd7-a5712ddcf51d=Assignment(partitions=[policy-pdp-pap-0])} 23:17:10 policy-pap | [2024-03-09T23:15:15.849+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-99d6a240-42a3-48b7-904b-df55de280eab-3, groupId=99d6a240-42a3-48b7-904b-df55de280eab] Successfully synced group in generation Generation{generationId=1, 
memberId='consumer-99d6a240-42a3-48b7-904b-df55de280eab-3-a4e50710-7734-4868-9a53-8dcca5787c23', protocol='range'} 23:17:10 policy-pap | [2024-03-09T23:15:15.850+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-99d6a240-42a3-48b7-904b-df55de280eab-3, groupId=99d6a240-42a3-48b7-904b-df55de280eab] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:17:10 policy-pap | [2024-03-09T23:15:15.853+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-99d6a240-42a3-48b7-904b-df55de280eab-3, groupId=99d6a240-42a3-48b7-904b-df55de280eab] Adding newly assigned partitions: policy-pdp-pap-0 23:17:10 policy-pap | [2024-03-09T23:15:15.854+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-e0fe26bd-6bcb-4009-9bd7-a5712ddcf51d', protocol='range'} 23:17:10 policy-pap | [2024-03-09T23:15:15.855+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:17:10 policy-pap | [2024-03-09T23:15:15.855+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 23:17:10 policy-pap | [2024-03-09T23:15:15.877+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 23:17:10 policy-pap | [2024-03-09T23:15:15.877+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-99d6a240-42a3-48b7-904b-df55de280eab-3, groupId=99d6a240-42a3-48b7-904b-df55de280eab] Found no committed offset for partition policy-pdp-pap-0 23:17:10 policy-pap | [2024-03-09T23:15:15.902+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 23:17:10 policy-pap | [2024-03-09T23:15:15.904+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-99d6a240-42a3-48b7-904b-df55de280eab-3, groupId=99d6a240-42a3-48b7-904b-df55de280eab] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
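[editor's note] The consumer activity above (discovering the group coordinator at kafka:9092, the MemberIdRequiredException retry on the first JoinGroup, the generation-1 assignment of policy-pdp-pap-0, and the offset reset after "Found no committed offset") is the ordinary Kafka consumer-group handshake. The sketch below is an illustrative consumer that would trigger a similar sequence; the deserializers and the poll timeout are assumptions, not values taken from the PAP configuration.

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092"); // coordinator address from the log
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");          // group id of the heartbeat consumer above
            // Assumption: string deserializers, matching the string serializer in the producer dump.
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // subscribe() starts the sequence logged above:
                // FindCoordinator -> JoinGroup (retried once for a member id) -> SyncGroup -> partition assignment.
                consumer.subscribe(Collections.singletonList("policy-pdp-pap"));
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }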
23:17:10 policy-pap | [2024-03-09T23:15:19.023+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-3] Initializing Spring DispatcherServlet 'dispatcherServlet' 23:17:10 policy-pap | [2024-03-09T23:15:19.024+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Initializing Servlet 'dispatcherServlet' 23:17:10 policy-pap | [2024-03-09T23:15:19.025+00:00|INFO|DispatcherServlet|http-nio-6969-exec-3] Completed initialization in 1 ms 23:17:10 policy-pap | [2024-03-09T23:15:33.300+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers: 23:17:10 policy-pap | [] 23:17:10 policy-pap | [2024-03-09T23:15:33.300+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:17:10 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"51fa2b77-ded6-4b99-a2e0-cf0de6105024","timestampMs":1710026133252,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup"} 23:17:10 policy-pap | [2024-03-09T23:15:33.304+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:10 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"51fa2b77-ded6-4b99-a2e0-cf0de6105024","timestampMs":1710026133252,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup"} 23:17:10 policy-pap | [2024-03-09T23:15:33.312+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 23:17:10 policy-pap | [2024-03-09T23:15:33.401+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpUpdate starting 23:17:10 policy-pap | [2024-03-09T23:15:33.402+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpUpdate starting listener 23:17:10 policy-pap | [2024-03-09T23:15:33.402+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpUpdate starting timer 23:17:10 policy-pap | [2024-03-09T23:15:33.403+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=677fe6da-3e09-4ff6-a3fe-8f782868df46, expireMs=1710026163403] 23:17:10 policy-pap | [2024-03-09T23:15:33.404+00:00|INFO|TimerManager|Thread-9] update timer waiting 29999ms Timer [name=677fe6da-3e09-4ff6-a3fe-8f782868df46, expireMs=1710026163403] 23:17:10 policy-pap | [2024-03-09T23:15:33.404+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpUpdate starting enqueue 23:17:10 policy-pap | [2024-03-09T23:15:33.405+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpUpdate started 23:17:10 policy-pap | [2024-03-09T23:15:33.408+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:17:10 policy-pap | {"source":"pap-984d5b06-e23e-4e42-abee-2db8b09e6a7c","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"677fe6da-3e09-4ff6-a3fe-8f782868df46","timestampMs":1710026133383,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:10 policy-pap | [2024-03-09T23:15:33.451+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:10 policy-pap | 
{"source":"pap-984d5b06-e23e-4e42-abee-2db8b09e6a7c","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"677fe6da-3e09-4ff6-a3fe-8f782868df46","timestampMs":1710026133383,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:10 policy-pap | [2024-03-09T23:15:33.451+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 23:17:10 policy-pap | [2024-03-09T23:15:33.455+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:17:10 policy-pap | {"source":"pap-984d5b06-e23e-4e42-abee-2db8b09e6a7c","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"677fe6da-3e09-4ff6-a3fe-8f782868df46","timestampMs":1710026133383,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:10 policy-pap | [2024-03-09T23:15:33.455+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 23:17:10 policy-pap | [2024-03-09T23:15:33.485+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:10 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"79c95b38-e2b1-49fc-a4ed-e70662c5a462","timestampMs":1710026133470,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup"} 23:17:10 policy-pap | [2024-03-09T23:15:33.485+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 23:17:10 policy-pap | [2024-03-09T23:15:33.486+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:17:10 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"79c95b38-e2b1-49fc-a4ed-e70662c5a462","timestampMs":1710026133470,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup"} 23:17:10 policy-pap | [2024-03-09T23:15:33.492+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:10 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"677fe6da-3e09-4ff6-a3fe-8f782868df46","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"f21395d0-17b2-48db-bc99-adf2c0e047ae","timestampMs":1710026133471,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:10 policy-pap | [2024-03-09T23:15:33.514+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpUpdate stopping 23:17:10 policy-pap | [2024-03-09T23:15:33.515+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpUpdate stopping enqueue 23:17:10 policy-pap | [2024-03-09T23:15:33.515+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpUpdate stopping timer 23:17:10 kafka | [2024-03-09 23:15:11,967] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for 
partition __consumer_offsets-45 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,967] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,967] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,967] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,967] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,967] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,967] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,967] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,967] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,967] TRACE [Controller id=1 epoch=1] Sending 
become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,968] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,968] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,968] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,968] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,968] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,968] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,968] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,968] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,968] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 23:17:10 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:17:10 
policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 23:17:10 kafka | [2024-03-09 23:15:11,968] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,968] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,968] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,969] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,969] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,969] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,969] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,969] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,969] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,969] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,969] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,969] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,969] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,969] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,970] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,970] TRACE 
[Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,970] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,970] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,970] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,971] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) 23:17:10 policy-pap | [2024-03-09T23:15:33.515+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=677fe6da-3e09-4ff6-a3fe-8f782868df46, expireMs=1710026163403] 23:17:10 policy-pap | [2024-03-09T23:15:33.515+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpUpdate stopping listener 23:17:10 policy-pap | [2024-03-09T23:15:33.515+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpUpdate stopped 23:17:10 policy-pap | [2024-03-09T23:15:33.519+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:17:10 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"677fe6da-3e09-4ff6-a3fe-8f782868df46","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"f21395d0-17b2-48db-bc99-adf2c0e047ae","timestampMs":1710026133471,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:10 policy-pap | [2024-03-09T23:15:33.520+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 677fe6da-3e09-4ff6-a3fe-8f782868df46 23:17:10 policy-pap | [2024-03-09T23:15:33.524+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpUpdate successful 23:17:10 policy-pap | [2024-03-09T23:15:33.524+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d start publishing next request 23:17:10 policy-pap | 
[2024-03-09T23:15:33.524+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpStateChange starting 23:17:10 policy-pap | [2024-03-09T23:15:33.525+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpStateChange starting listener 23:17:10 policy-pap | [2024-03-09T23:15:33.525+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpStateChange starting timer 23:17:10 policy-pap | [2024-03-09T23:15:33.525+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=b376ce6a-be29-4c6f-9177-d857e98b4d69, expireMs=1710026163525] 23:17:10 policy-pap | [2024-03-09T23:15:33.525+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=b376ce6a-be29-4c6f-9177-d857e98b4d69, expireMs=1710026163525] 23:17:10 policy-pap | [2024-03-09T23:15:33.525+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpStateChange starting enqueue 23:17:10 policy-pap | [2024-03-09T23:15:33.525+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpStateChange started 23:17:10 policy-pap | [2024-03-09T23:15:33.525+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:17:10 policy-pap | {"source":"pap-984d5b06-e23e-4e42-abee-2db8b09e6a7c","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"b376ce6a-be29-4c6f-9177-d857e98b4d69","timestampMs":1710026133384,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:10 policy-pap | [2024-03-09T23:15:33.542+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:17:10 policy-pap | {"source":"pap-984d5b06-e23e-4e42-abee-2db8b09e6a7c","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"b376ce6a-be29-4c6f-9177-d857e98b4d69","timestampMs":1710026133384,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:10 policy-pap | [2024-03-09T23:15:33.543+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 23:17:10 policy-pap | [2024-03-09T23:15:33.551+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:17:10 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"b376ce6a-be29-4c6f-9177-d857e98b4d69","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"ef94ea69-9cd9-4d2a-942c-2a0e2f2bf9e5","timestampMs":1710026133539,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:10 policy-pap | [2024-03-09T23:15:33.552+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id b376ce6a-be29-4c6f-9177-d857e98b4d69 23:17:10 policy-pap | [2024-03-09T23:15:33.640+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:10 policy-pap | {"source":"pap-984d5b06-e23e-4e42-abee-2db8b09e6a7c","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"b376ce6a-be29-4c6f-9177-d857e98b4d69","timestampMs":1710026133384,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:10 policy-pap | [2024-03-09T23:15:33.640+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 23:17:10 policy-pap | [2024-03-09T23:15:33.642+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:10 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"b376ce6a-be29-4c6f-9177-d857e98b4d69","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"ef94ea69-9cd9-4d2a-942c-2a0e2f2bf9e5","timestampMs":1710026133539,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:10 policy-pap | [2024-03-09T23:15:33.643+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpStateChange stopping 23:17:10 policy-pap | [2024-03-09T23:15:33.643+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpStateChange stopping enqueue 23:17:10 policy-pap | [2024-03-09T23:15:33.643+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpStateChange stopping timer 23:17:10 policy-pap | [2024-03-09T23:15:33.643+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=b376ce6a-be29-4c6f-9177-d857e98b4d69, expireMs=1710026163525] 23:17:10 policy-pap | [2024-03-09T23:15:33.643+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpStateChange stopping listener 23:17:10 policy-pap | [2024-03-09T23:15:33.643+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpStateChange stopped 23:17:10 policy-pap | [2024-03-09T23:15:33.643+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpStateChange successful 23:17:10 policy-pap | [2024-03-09T23:15:33.643+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d start publishing next request 23:17:10 policy-pap | [2024-03-09T23:15:33.643+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpUpdate starting 23:17:10 policy-pap | [2024-03-09T23:15:33.643+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpUpdate starting listener 23:17:10 policy-pap | [2024-03-09T23:15:33.643+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpUpdate starting timer 23:17:10 policy-pap | 
[2024-03-09T23:15:33.643+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=9dd920d4-481e-4f3b-96de-86d19b111a74, expireMs=1710026163643] 23:17:10 policy-pap | [2024-03-09T23:15:33.643+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpUpdate starting enqueue 23:17:10 policy-pap | [2024-03-09T23:15:33.643+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpUpdate started 23:17:10 policy-pap | [2024-03-09T23:15:33.644+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:17:10 policy-pap | {"source":"pap-984d5b06-e23e-4e42-abee-2db8b09e6a7c","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"9dd920d4-481e-4f3b-96de-86d19b111a74","timestampMs":1710026133629,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:10 policy-pap | [2024-03-09T23:15:33.653+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:10 policy-pap | {"source":"pap-984d5b06-e23e-4e42-abee-2db8b09e6a7c","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"9dd920d4-481e-4f3b-96de-86d19b111a74","timestampMs":1710026133629,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:10 policy-pap | [2024-03-09T23:15:33.654+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 23:17:10 policy-pap | [2024-03-09T23:15:33.657+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:17:10 policy-pap | {"source":"pap-984d5b06-e23e-4e42-abee-2db8b09e6a7c","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"9dd920d4-481e-4f3b-96de-86d19b111a74","timestampMs":1710026133629,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:10 policy-pap | [2024-03-09T23:15:33.657+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 23:17:10 policy-pap | [2024-03-09T23:15:33.667+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:17:10 kafka | [2024-03-09 23:15:11,974] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,975] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,975] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,976] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,976] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica 
to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,976] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,976] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,976] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,976] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,976] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,976] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,976] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,976] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,976] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,976] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,976] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,977] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,977] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,977] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,977] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,977] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,977] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,977] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,977] TRACE 
[Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,977] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,977] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,977] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,977] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,977] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,977] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,978] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,978] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,978] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,978] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,978] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,978] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,978] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,978] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,978] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,978] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,978] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,978] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,978] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,978] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,979] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,979] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,979] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,979] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,979] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,979] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,979] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,979] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,979] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,982] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,984] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,984] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,984] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 
23:15:11,984] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,984] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,984] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,985] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,985] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,985] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,985] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,985] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,985] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | 
[2024-03-09 23:15:11,986] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,986] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,986] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,986] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"9dd920d4-481e-4f3b-96de-86d19b111a74","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"35213e2b-732e-402c-8250-b38ab5f418f5","timestampMs":1710026133656,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:10 policy-pap | [2024-03-09T23:15:33.668+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpUpdate stopping 23:17:10 policy-pap | [2024-03-09T23:15:33.668+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpUpdate stopping enqueue 23:17:10 policy-pap | [2024-03-09T23:15:33.668+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpUpdate stopping timer 23:17:10 policy-pap | [2024-03-09T23:15:33.668+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=9dd920d4-481e-4f3b-96de-86d19b111a74, expireMs=1710026163643] 23:17:10 policy-pap | [2024-03-09T23:15:33.668+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpUpdate stopping listener 23:17:10 policy-pap | [2024-03-09T23:15:33.668+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpUpdate stopped 23:17:10 policy-pap | [2024-03-09T23:15:33.670+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:17:10 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"9dd920d4-481e-4f3b-96de-86d19b111a74","responseStatus":"SUCCESS","responseMessage":"Pdp already 
updated"},"messageName":"PDP_STATUS","requestId":"35213e2b-732e-402c-8250-b38ab5f418f5","timestampMs":1710026133656,"name":"apex-a4761c86-ccce-426b-830a-adbddd25197d","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:17:10 policy-pap | [2024-03-09T23:15:33.671+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 9dd920d4-481e-4f3b-96de-86d19b111a74 23:17:10 policy-pap | [2024-03-09T23:15:33.678+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d PdpUpdate successful 23:17:10 policy-pap | [2024-03-09T23:15:33.678+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-a4761c86-ccce-426b-830a-adbddd25197d has no more requests 23:17:10 policy-pap | [2024-03-09T23:15:39.676+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:17:10 policy-pap | [2024-03-09T23:15:39.685+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:17:10 policy-pap | [2024-03-09T23:15:40.157+00:00|INFO|SessionData|http-nio-6969-exec-6] unknown group testGroup 23:17:10 policy-pap | [2024-03-09T23:15:40.780+00:00|INFO|SessionData|http-nio-6969-exec-6] create cached group testGroup 23:17:10 policy-pap | [2024-03-09T23:15:40.781+00:00|INFO|SessionData|http-nio-6969-exec-6] creating DB group testGroup 23:17:10 policy-pap | [2024-03-09T23:15:41.369+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group testGroup 23:17:10 policy-pap | [2024-03-09T23:15:41.621+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy onap.restart.tca 1.0.0 23:17:10 policy-pap | [2024-03-09T23:15:41.765+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 23:17:10 policy-pap | [2024-03-09T23:15:41.765+00:00|INFO|SessionData|http-nio-6969-exec-10] update cached group testGroup 23:17:10 policy-pap | [2024-03-09T23:15:41.767+00:00|INFO|SessionData|http-nio-6969-exec-10] updating DB group testGroup 23:17:10 policy-pap | [2024-03-09T23:15:41.784+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-10] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-03-09T23:15:41Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-03-09T23:15:41Z, user=policyadmin)] 23:17:10 policy-pap | [2024-03-09T23:15:42.497+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup 23:17:10 policy-pap | [2024-03-09T23:15:42.498+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 23:17:10 policy-pap | [2024-03-09T23:15:42.498+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy onap.restart.tca 1.0.0 23:17:10 policy-pap | [2024-03-09T23:15:42.499+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup 23:17:10 policy-pap | [2024-03-09T23:15:42.499+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup 23:17:10 policy-pap | [2024-03-09T23:15:42.514+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-03-09T23:15:42Z, user=policyadmin)] 23:17:10 policy-pap | 
[2024-03-09T23:15:42.899+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group defaultGroup 23:17:10 policy-pap | [2024-03-09T23:15:42.899+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup 23:17:10 policy-pap | [2024-03-09T23:15:42.899+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 23:17:10 policy-pap | [2024-03-09T23:15:42.899+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 23:17:10 policy-pap | [2024-03-09T23:15:42.899+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup 23:17:10 policy-pap | [2024-03-09T23:15:42.900+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup 23:17:10 policy-pap | [2024-03-09T23:15:42.919+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-03-09T23:15:42Z, user=policyadmin)] 23:17:10 policy-pap | [2024-03-09T23:16:03.404+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=677fe6da-3e09-4ff6-a3fe-8f782868df46, expireMs=1710026163403] 23:17:10 policy-pap | [2024-03-09T23:16:03.518+00:00|INFO|SessionData|http-nio-6969-exec-10] cache group testGroup 23:17:10 policy-pap | [2024-03-09T23:16:03.521+00:00|INFO|SessionData|http-nio-6969-exec-10] deleting DB group testGroup 23:17:10 policy-pap | [2024-03-09T23:16:03.525+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=b376ce6a-be29-4c6f-9177-d857e98b4d69, expireMs=1710026163525] 23:17:10 kafka | [2024-03-09 23:15:11,986] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,986] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,987] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,987] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,987] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, 
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,987] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,987] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,987] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,988] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,988] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,988] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,988] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,988] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,988] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, 
leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,988] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,989] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,989] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,989] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,989] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,989] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE 
RESTRICT ON DELETE RESTRICT 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0100-pdp.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 23:17:10 kafka | [2024-03-09 23:15:11,989] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,989] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,990] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,990] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,990] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,990] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,990] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,990] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,991] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,991] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,991] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,991] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', 
partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,991] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,991] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:11,992] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,039] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,039] 
TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a 23:17:10 policy-db-migrator | JOIN pdpstatistics b 23:17:10 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp 23:17:10 policy-db-migrator | SET a.id = b.id 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER 
VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0210-sequence.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0220-sequence.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 
(state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] 
Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,040] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,041] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,041] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,041] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 
0120-toscatrigger.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 kafka | [2024-03-09 23:15:12,041] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,041] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,041] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,041] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,043] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 23:17:10 kafka | [2024-03-09 23:15:12,043] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,089] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,100] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,104] INFO [Partition __consumer_offsets-3 
broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,104] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,106] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,119] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,120] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,120] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,120] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,120] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,132] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,133] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,133] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,133] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,134] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,145] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,146] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,146] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,146] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,147] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,157] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,157] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,157] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,158] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,158] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,167] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,168] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,168] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,168] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,168] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,177] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,177] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,177] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,177] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,178] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,186] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,187] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,187] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,187] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,187] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,195] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,195] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,195] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,195] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,195] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,204] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,205] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,205] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,205] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,205] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,211] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,211] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,211] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,211] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,211] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,217] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,218] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,218] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,218] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0140-toscaparameter.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0150-toscaproperty.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0100-upgrade.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | select 'upgrade to 1100 completed' as msg 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 
policy-db-migrator | msg 23:17:10 policy-db-migrator | upgrade to 1100 completed 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0120-audit_sequence.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 kafka | [2024-03-09 23:15:12,218] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,225] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,226] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,226] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,226] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,226] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,233] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,234] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,234] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,234] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,234] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,241] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,242] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,242] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,242] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,242] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,249] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,250] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,250] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,250] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,250] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,259] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,260] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,260] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,260] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,260] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,269] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,270] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,270] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,270] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,271] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,278] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,278] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,278] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,279] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | TRUNCATE TABLE sequence 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | DROP TABLE pdpstatistics 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | DROP TABLE statistics_sequence 23:17:10 policy-db-migrator | -------------- 23:17:10 policy-db-migrator | 23:17:10 policy-db-migrator | policyadmin: OK: upgrade (1300) 23:17:10 policy-db-migrator | name version 23:17:10 policy-db-migrator | policyadmin 1300 23:17:10 policy-db-migrator | ID script operation from_version to_version tag success atTime 23:17:10 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:29 23:17:10 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:30 23:17:10 policy-db-migrator | 3 
0120-jpapdpsubgroup_policies.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:30 23:17:10 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:30 23:17:10 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:30 23:17:10 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:30 23:17:10 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:30 23:17:10 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:31 23:17:10 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:31 23:17:10 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:31 23:17:10 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:31 23:17:10 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:31 23:17:10 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:31 23:17:10 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:32 23:17:10 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:32 23:17:10 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:32 23:17:10 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:32 23:17:10 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:32 23:17:10 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:33 23:17:10 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:33 23:17:10 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:33 23:17:10 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:33 23:17:10 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:33 23:17:10 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:33 23:17:10 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:34 23:17:10 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:34 23:17:10 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:34 23:17:10 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:34 23:17:10 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:34 23:17:10 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:35 23:17:10 kafka | 
[2024-03-09 23:15:12,279] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,286] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,292] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,292] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,292] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,293] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,308] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,309] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,309] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,310] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,310] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,316] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,317] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,317] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,317] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,317] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,325] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,326] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,326] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,326] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,326] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,339] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,340] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,340] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,340] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,340] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,348] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,348] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,349] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,349] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,349] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,357] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,358] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,358] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 23:17:10 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:35 23:17:10 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:35 23:17:10 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:35 23:17:10 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:35 23:17:10 kafka | [2024-03-09 23:15:12,358] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,358] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,380] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,381] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,381] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,382] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,382] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,389] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,390] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,390] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,390] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,390] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,395] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,396] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,396] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,396] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,396] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,401] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,402] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,402] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,402] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,402] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,408] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,409] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,409] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,409] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,409] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,418] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,418] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,418] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,418] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,419] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,427] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,428] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,428] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 23:17:10 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:35 23:17:10 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:35 23:17:10 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:36 23:17:10 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:36 23:17:10 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:36 23:17:10 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:36 23:17:10 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:36 23:17:10 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:36 23:17:10 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:37 23:17:10 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:37 23:17:10 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:37 23:17:10 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:37 23:17:10 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:37 23:17:10 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:37 23:17:10 
policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:37
23:17:10 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:38
23:17:10 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:38
23:17:10 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:38
23:17:10 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:38
23:17:10 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:38
23:17:10 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:39
23:17:10 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:39
23:17:10 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:39
23:17:10 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:39
23:17:10 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:39
23:17:10 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:39
23:17:10 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:40
23:17:10 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:40
23:17:10 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:40
23:17:10 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:40
23:17:10 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:40
23:17:10 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:40
23:17:10 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:40
23:17:10 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:41
23:17:10 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:41
23:17:10 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:41
23:17:10 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:41
23:17:10 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:41
23:17:10 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:41
23:17:10 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:41
23:17:10 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:41
23:17:10 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:41
23:17:10 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:41
23:17:10 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:41
23:17:10 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:41
23:17:10 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:41
23:17:10 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:41
23:17:10 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:41
23:17:10 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:41
23:17:10 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:41
23:17:10 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:41
23:17:10 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:41
23:17:10 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:42
23:17:10 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:42
23:17:10 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:42
23:17:10 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:42
23:17:10 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:42
23:17:10 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:42
23:17:10 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:42
23:17:10 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:42
23:17:10 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:42
23:17:10 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0903242314290800u 1 2024-03-09 23:14:42
23:17:10 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 0903242314290900u 1 2024-03-09 23:14:42
23:17:10 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 0903242314290900u 1 2024-03-09 23:14:42
23:17:10 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 0903242314290900u 1 2024-03-09 23:14:42
23:17:10 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 0903242314290900u 1 2024-03-09 23:14:42
23:17:10 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 0903242314290900u 1 2024-03-09 23:14:43
23:17:10 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 0903242314290900u 1 2024-03-09 23:14:43
23:17:10 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0903242314290900u 1 2024-03-09 23:14:43
23:17:10 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0903242314290900u 1 2024-03-09 23:14:43
23:17:10 policy-db-migrator | 105
0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0903242314290900u 1 2024-03-09 23:14:43 23:17:10 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 0903242314290900u 1 2024-03-09 23:14:43 23:17:10 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 0903242314290900u 1 2024-03-09 23:14:43 23:17:10 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 0903242314290900u 1 2024-03-09 23:14:43 23:17:10 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 0903242314290900u 1 2024-03-09 23:14:43 23:17:10 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 0903242314291000u 1 2024-03-09 23:14:43 23:17:10 kafka | [2024-03-09 23:15:12,428] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,428] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,437] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,438] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,438] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,438] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,438] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,444] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,445] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,445] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,445] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,445] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(_NbNQG3qR0OxU2OeMb0-vA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,452] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,453] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,453] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,454] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,454] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,462] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,463] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,463] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,463] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,463] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,470] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,471] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,471] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,471] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,471] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,485] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,490] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,490] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,490] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,490] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,498] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,499] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,499] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,499] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,499] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,506] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,507] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,507] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,507] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,507] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,513] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,514] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,514] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,514] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,515] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,521] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,522] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,522] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,522] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,523] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,529] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,529] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,530] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,530] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,530] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,536] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,537] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,537] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,537] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,537] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,545] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,545] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,546] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,546] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,546] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,559] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,560] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 0903242314291000u 1 2024-03-09 23:14:43 23:17:10 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 0903242314291000u 1 2024-03-09 23:14:43 23:17:10 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 0903242314291000u 1 2024-03-09 23:14:43 23:17:10 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 0903242314291000u 1 2024-03-09 23:14:43 23:17:10 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 0903242314291000u 1 2024-03-09 23:14:43 23:17:10 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 0903242314291000u 1 2024-03-09 23:14:43 23:17:10 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 0903242314291000u 1 2024-03-09 23:14:43 23:17:10 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 0903242314291000u 1 2024-03-09 23:14:43 23:17:10 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 0903242314291100u 1 2024-03-09 23:14:43 23:17:10 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 0903242314291200u 1 2024-03-09 23:14:43 23:17:10 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 0903242314291200u 1 2024-03-09 23:14:44 23:17:10 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 0903242314291200u 1 2024-03-09 23:14:44 23:17:10 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 0903242314291200u 1 2024-03-09 23:14:44 23:17:10 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 0903242314291300u 1 2024-03-09 23:14:44 23:17:10 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 0903242314291300u 1 2024-03-09 23:14:44 23:17:10 policy-db-migrator | 126 0120-statistics_sequence.sql 
upgrade 1200 1300 0903242314291300u 1 2024-03-09 23:14:44 23:17:10 policy-db-migrator | policyadmin: OK @ 1300 23:17:10 kafka | [2024-03-09 23:15:12,560] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,560] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,560] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,569] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,570] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,570] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,570] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,570] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,578] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,579] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,579] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,579] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,579] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,585] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,585] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,585] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,585] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,585] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,595] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:17:10 kafka | [2024-03-09 23:15:12,596] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:17:10 kafka | [2024-03-09 23:15:12,596] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,596] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 23:17:10 kafka | [2024-03-09 23:15:12,596] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(wySMvCKtSlSRxF3RSTLuwA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 
epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,602] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,603] TRACE 
[Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,603] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,603] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,603] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,603] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,603] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,603] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,603] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,603] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,603] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,603] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,603] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,603] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,603] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,603] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,603] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 
(state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,603] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,603] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,603] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,603] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,621] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,625] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,627] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,627] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,627] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,627] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,627] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,627] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,628] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,628] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,628] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,628] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,628] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,628] INFO [GroupMetadataManager brokerId=1] Scheduling loading of 
offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,628] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,628] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,628] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,628] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,628] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,628] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,628] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,628] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,629] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,629] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,629] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,629] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,629] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,629] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,629] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,629] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,629] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,629] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,629] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,629] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,630] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,630] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,630] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,630] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,630] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,630] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,630] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,630] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,630] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,630] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,630] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,630] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,631] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,631] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,631] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,631] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,631] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,631] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,631] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,631] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,631] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,631] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,631] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,632] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,632] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,632] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,632] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,632] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,632] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,632] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,632] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,632] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,632] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,632] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,633] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,633] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,633] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,633] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,633] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,633] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,633] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,633] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,633] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,633] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,633] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,633] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,634] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,634] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,634] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,634] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,634] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,634] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,634] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,634] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,634] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,634] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,634] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,635] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,635] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,635] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,635] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,635] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,635] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,635] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,635] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,635] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,636] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 10 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,639] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,639] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,639] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,639] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,639] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,639] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,640] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,640] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,640] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,640] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,640] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,640] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,640] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,640] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,640] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,640] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,640] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,641] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,641] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,641] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,641] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,641] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,641] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,641] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,641] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,641] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,641] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,642] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 10 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,642] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,642] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,642] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,642] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,642] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,642] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,642] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,642] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,642] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,642] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,643] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,643] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,643] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,643] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,643] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,643] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 9 milliseconds for epoch 0, of which 9 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,643] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,643] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,643] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,643] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,644] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 8 milliseconds for epoch 0, of which 8 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:17:10 kafka | [2024-03-09 23:15:12,646] INFO [Broker id=1] Finished LeaderAndIsr request in 664ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,650] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=wySMvCKtSlSRxF3RSTLuwA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), 
LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=_NbNQG3qR0OxU2OeMb0-vA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,658] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,658] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,658] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,658] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,658] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,658] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,658] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], 
offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,659] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,659] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,659] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,659] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,659] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,659] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,659] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,660] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,660] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,660] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,660] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,660] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,661] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,661] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,661] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,661] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,661] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 
with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,661] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,661] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,661] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,662] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,662] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,662] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,662] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,662] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,662] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, 
replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,662] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,663] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,663] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,663] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,663] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,663] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,663] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,663] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,663] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,663] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,663] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 
with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,664] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,665] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,666] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:17:10 kafka | [2024-03-09 23:15:12,750] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-e0fe26bd-6bcb-4009-9bd7-a5712ddcf51d and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,758] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group 99d6a240-42a3-48b7-904b-df55de280eab in Empty state. Created a new member id consumer-99d6a240-42a3-48b7-904b-df55de280eab-3-a4e50710-7734-4868-9a53-8dcca5787c23 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,788] INFO [GroupCoordinator 1]: Preparing to rebalance group 99d6a240-42a3-48b7-904b-df55de280eab in state PreparingRebalance with old generation 0 (__consumer_offsets-22) (reason: Adding new member consumer-99d6a240-42a3-48b7-904b-df55de280eab-3-a4e50710-7734-4868-9a53-8dcca5787c23 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:12,794] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-e0fe26bd-6bcb-4009-9bd7-a5712ddcf51d with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:13,653] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group dbe26ca2-4841-4841-92e1-919ee240973d in Empty state. Created a new member id consumer-dbe26ca2-4841-4841-92e1-919ee240973d-2-b749b30d-3583-4260-a790-ab3479b00a8b and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:13,658] INFO [GroupCoordinator 1]: Preparing to rebalance group dbe26ca2-4841-4841-92e1-919ee240973d in state PreparingRebalance with old generation 0 (__consumer_offsets-14) (reason: Adding new member consumer-dbe26ca2-4841-4841-92e1-919ee240973d-2-b749b30d-3583-4260-a790-ab3479b00a8b with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:15,800] INFO [GroupCoordinator 1]: Stabilized group 99d6a240-42a3-48b7-904b-df55de280eab generation 1 (__consumer_offsets-22) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:15,804] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:15,826] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-e0fe26bd-6bcb-4009-9bd7-a5712ddcf51d for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:15,826] INFO [GroupCoordinator 1]: Assignment received from leader consumer-99d6a240-42a3-48b7-904b-df55de280eab-3-a4e50710-7734-4868-9a53-8dcca5787c23 for group 99d6a240-42a3-48b7-904b-df55de280eab for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:16,660] INFO [GroupCoordinator 1]: Stabilized group dbe26ca2-4841-4841-92e1-919ee240973d generation 1 (__consumer_offsets-14) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:17:10 kafka | [2024-03-09 23:15:16,684] INFO [GroupCoordinator 1]: Assignment received from leader consumer-dbe26ca2-4841-4841-92e1-919ee240973d-2-b749b30d-3583-4260-a790-ab3479b00a8b for group dbe26ca2-4841-4841-92e1-919ee240973d for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:17:10 ++ echo 'Tearing down containers...' 23:17:10 Tearing down containers... 23:17:10 ++ docker-compose down -v --remove-orphans 23:17:11 Stopping policy-apex-pdp ... 23:17:11 Stopping policy-pap ... 23:17:11 Stopping policy-api ... 23:17:11 Stopping kafka ... 23:17:11 Stopping grafana ... 23:17:11 Stopping compose_zookeeper_1 ... 23:17:11 Stopping simulator ... 23:17:11 Stopping prometheus ... 23:17:11 Stopping mariadb ... 23:17:11 Stopping grafana ... done 23:17:11 Stopping prometheus ... done 23:17:21 Stopping policy-apex-pdp ... done 23:17:31 Stopping simulator ... done 23:17:31 Stopping policy-pap ... done 23:17:32 Stopping mariadb ... done 23:17:32 Stopping kafka ... done 23:17:33 Stopping compose_zookeeper_1 ... done 23:17:42 Stopping policy-api ... done 23:17:42 Removing policy-apex-pdp ... 23:17:42 Removing policy-pap ... 23:17:42 Removing policy-api ... 23:17:42 Removing kafka ... 23:17:42 Removing policy-db-migrator ... 23:17:42 Removing grafana ... 23:17:42 Removing compose_zookeeper_1 ... 23:17:42 Removing simulator ... 23:17:42 Removing prometheus ... 23:17:42 Removing mariadb ... 23:17:42 Removing policy-apex-pdp ... done 23:17:42 Removing simulator ... done 23:17:42 Removing policy-api ... done 23:17:42 Removing policy-pap ... done 23:17:42 Removing kafka ... done 23:17:42 Removing compose_zookeeper_1 ... done 23:17:42 Removing grafana ... done 23:17:42 Removing policy-db-migrator ... 
done 23:17:42 Removing mariadb ... done 23:17:42 Removing prometheus ... done 23:17:42 Removing network compose_default 23:17:42 ++ cd /w/workspace/policy-pap-master-project-csit-pap 23:17:42 + load_set 23:17:42 + _setopts=hxB 23:17:42 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:17:42 ++ tr : ' ' 23:17:42 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:42 + set +o braceexpand 23:17:42 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:42 + set +o hashall 23:17:42 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:42 + set +o interactive-comments 23:17:42 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:42 + set +o xtrace 23:17:42 ++ echo hxB 23:17:42 ++ sed 's/./& /g' 23:17:42 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:17:42 + set +h 23:17:42 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:17:42 + set +x 23:17:42 + [[ -n /tmp/tmp.esLYy6xppR ]] 23:17:42 + rsync -av /tmp/tmp.esLYy6xppR/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 23:17:42 sending incremental file list 23:17:42 ./ 23:17:42 log.html 23:17:42 output.xml 23:17:42 report.html 23:17:42 testplan.txt 23:17:42 23:17:42 sent 919,204 bytes received 95 bytes 1,838,598.00 bytes/sec 23:17:42 total size is 918,662 speedup is 1.00 23:17:42 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models 23:17:42 + exit 0 23:17:42 $ ssh-agent -k 23:17:42 unset SSH_AUTH_SOCK; 23:17:42 unset SSH_AGENT_PID; 23:17:42 echo Agent pid 2090 killed; 23:17:42 [ssh-agent] Stopped. 23:17:42 Robot results publisher started... 23:17:42 INFO: Checking test criticality is deprecated and will be dropped in a future release! 23:17:42 -Parsing output xml: 23:17:43 Done! 23:17:43 WARNING! Could not find file: **/log.html 23:17:43 WARNING! Could not find file: **/report.html 23:17:43 -Copying log files to build dir: 23:17:43 Done! 23:17:43 -Assigning results to build: 23:17:43 Done! 23:17:43 -Checking thresholds: 23:17:43 Done! 23:17:43 Done publishing Robot results. 23:17:43 [PostBuildScript] - [INFO] Executing post build scripts. 
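The GroupCoordinator messages earlier in this log trace the standard consumer-group join flow: the first join without a member id is answered with a new id and a MemberIdRequiredException-driven rejoin, the group moves through PreparingRebalance, stabilizes at generation 1 with one member, and the leader's assignment is accepted. As a minimal sketch only (not part of this job's scripts), the same group state could be inspected from inside the kafka container with the stock consumer-groups CLI; the broker address kafka:9092 and the group name policy-pap are taken from the log, while the tool name/path is an assumption that varies by image:

# Sketch: inspect the consumer groups reported by the coordinator above.
# Assumes the broker is reachable as kafka:9092, as in the LeaderAndIsr trace.
# Confluent-based images usually install the tool as kafka-consumer-groups
# (no .sh suffix); the Apache distribution ships bin/kafka-consumer-groups.sh.
docker exec kafka kafka-consumer-groups --bootstrap-server kafka:9092 --list
docker exec kafka kafka-consumer-groups --bootstrap-server kafka:9092 \
  --describe --group policy-pap
# --describe shows the generation, the single member seen in
# "Stabilized group policy-pap generation 1 ... with 1 members",
# and the partition assignment submitted by the group leader.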
23:17:43 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1520006438731096148.sh 23:17:43 ---> sysstat.sh 23:17:43 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins2097949405357777153.sh 23:17:43 ---> package-listing.sh 23:17:43 ++ facter osfamily 23:17:43 ++ tr '[:upper:]' '[:lower:]' 23:17:44 + OS_FAMILY=debian 23:17:44 + workspace=/w/workspace/policy-pap-master-project-csit-pap 23:17:44 + START_PACKAGES=/tmp/packages_start.txt 23:17:44 + END_PACKAGES=/tmp/packages_end.txt 23:17:44 + DIFF_PACKAGES=/tmp/packages_diff.txt 23:17:44 + PACKAGES=/tmp/packages_start.txt 23:17:44 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 23:17:44 + PACKAGES=/tmp/packages_end.txt 23:17:44 + case "${OS_FAMILY}" in 23:17:44 + dpkg -l 23:17:44 + grep '^ii' 23:17:44 + '[' -f /tmp/packages_start.txt ']' 23:17:44 + '[' -f /tmp/packages_end.txt ']' 23:17:44 + diff /tmp/packages_start.txt /tmp/packages_end.txt 23:17:44 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 23:17:44 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/ 23:17:44 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/ 23:17:44 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins14876576802821947583.sh 23:17:44 ---> capture-instance-metadata.sh 23:17:44 Setup pyenv: 23:17:44 system 23:17:44 3.8.13 23:17:44 3.9.13 23:17:44 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:17:44 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-T1G5 from file:/tmp/.os_lf_venv 23:17:45 lf-activate-venv(): INFO: Installing: lftools 23:17:55 lf-activate-venv(): INFO: Adding /tmp/venv-T1G5/bin to PATH 23:17:55 INFO: Running in OpenStack, capturing instance metadata 23:17:55 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins12771125826161514891.sh 23:17:55 provisioning config files... 23:17:55 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config7852530890289550127tmp 23:17:55 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] 23:17:55 Run condition [Regular expression match] preventing perform for step [Provide Configuration files] 23:17:55 [EnvInject] - Injecting environment variables from a build step. 23:17:55 [EnvInject] - Injecting as environment variables the properties content 23:17:55 SERVER_ID=logs 23:17:55 23:17:55 [EnvInject] - Variables injected successfully. 23:17:55 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4577373116850803749.sh 23:17:55 ---> create-netrc.sh 23:17:55 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1829916151871614772.sh 23:17:55 ---> python-tools-install.sh 23:17:55 Setup pyenv: 23:17:55 system 23:17:55 3.8.13 23:17:55 3.9.13 23:17:55 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:17:55 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-T1G5 from file:/tmp/.os_lf_venv 23:17:57 lf-activate-venv(): INFO: Installing: lftools 23:18:05 lf-activate-venv(): INFO: Adding /tmp/venv-T1G5/bin to PATH 23:18:05 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins7549710071565726269.sh 23:18:05 ---> sudo-logs.sh 23:18:05 Archiving 'sudo' log.. 
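The package-listing trace above captures the installed-package delta for the build archive: on a Debian-family node, dpkg -l is filtered to installed ("ii") entries into /tmp/packages_end.txt and diffed against the start-of-job snapshot before everything is copied into the workspace archives. A minimal stand-alone sketch of that same idea, assuming the file names from the trace; $WORKSPACE stands in for /w/workspace/policy-pap-master-project-csit-pap and the Jenkins plumbing around the real script is omitted:

# Sketch of the packages_start/packages_end diff seen in package-listing.sh.
# Debian-family branch only, matching OS_FAMILY=debian in the trace above.
dpkg -l | grep '^ii' > /tmp/packages_end.txt
if [ -f /tmp/packages_start.txt ] && [ -f /tmp/packages_end.txt ]; then
  # diff exits non-zero when the lists differ; "|| true" keeps the sketch going.
  diff /tmp/packages_start.txt /tmp/packages_end.txt > /tmp/packages_diff.txt || true
fi
mkdir -p "${WORKSPACE}/archives/"
cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt \
  "${WORKSPACE}/archives/"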
23:18:05 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1281330978542247866.sh
23:18:05 ---> job-cost.sh
23:18:05 Setup pyenv:
23:18:05 system
23:18:05 3.8.13
23:18:05 3.9.13
23:18:05 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:18:05 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-T1G5 from file:/tmp/.os_lf_venv
23:18:07 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
23:18:11 lf-activate-venv(): INFO: Adding /tmp/venv-T1G5/bin to PATH
23:18:11 INFO: No Stack...
23:18:12 INFO: Retrieving Pricing Info for: v3-standard-8
23:18:12 INFO: Archiving Costs
23:18:12 [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins8997902234315219421.sh
23:18:12 ---> logs-deploy.sh
23:18:12 Setup pyenv:
23:18:12 system
23:18:12 3.8.13
23:18:12 3.9.13
23:18:12 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:18:12 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-T1G5 from file:/tmp/.os_lf_venv
23:18:13 lf-activate-venv(): INFO: Installing: lftools
23:18:22 lf-activate-venv(): INFO: Adding /tmp/venv-T1G5/bin to PATH
23:18:22 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1605
23:18:22 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
23:18:23 Archives upload complete.
23:18:23 INFO: archiving logs to Nexus
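Each post-build step above reports "Reuse venv:/tmp/venv-T1G5 from file:/tmp/.os_lf_venv" instead of building a fresh virtualenv. A minimal sketch of that caching pattern, assuming the simplest possible behaviour; the real lf-activate-venv helper does considerably more:

    VENV_FILE=/tmp/.os_lf_venv
    if [ -f "$VENV_FILE" ]; then
        venv=$(cat "$VENV_FILE")             # reuse the venv created earlier in this job
    else
        venv=$(mktemp -d /tmp/venv-XXXX)     # first caller creates it ...
        python3 -m venv "$venv"
        echo "$venv" > "$VENV_FILE"          # ... and records the path for later build steps
    fi
    "$venv/bin/pip" install --quiet --upgrade lftools
    export PATH="$venv/bin:$PATH"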
23:18:24 ---> uname -a:
23:18:24 Linux prd-ubuntu1804-docker-8c-8g-12229 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
23:18:24
23:18:24
23:18:24 ---> lscpu:
23:18:24 Architecture: x86_64
23:18:24 CPU op-mode(s): 32-bit, 64-bit
23:18:24 Byte Order: Little Endian
23:18:24 CPU(s): 8
23:18:24 On-line CPU(s) list: 0-7
23:18:24 Thread(s) per core: 1
23:18:24 Core(s) per socket: 1
23:18:24 Socket(s): 8
23:18:24 NUMA node(s): 1
23:18:24 Vendor ID: AuthenticAMD
23:18:24 CPU family: 23
23:18:24 Model: 49
23:18:24 Model name: AMD EPYC-Rome Processor
23:18:24 Stepping: 0
23:18:24 CPU MHz: 2800.000
23:18:24 BogoMIPS: 5600.00
23:18:24 Virtualization: AMD-V
23:18:24 Hypervisor vendor: KVM
23:18:24 Virtualization type: full
23:18:24 L1d cache: 32K
23:18:24 L1i cache: 32K
23:18:24 L2 cache: 512K
23:18:24 L3 cache: 16384K
23:18:24 NUMA node0 CPU(s): 0-7
23:18:24 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
23:18:24
23:18:24
23:18:24 ---> nproc:
23:18:24 8
23:18:24
23:18:24
23:18:24 ---> df -h:
23:18:24 Filesystem Size Used Avail Use% Mounted on
23:18:24 udev 16G 0 16G 0% /dev
23:18:24 tmpfs 3.2G 708K 3.2G 1% /run
23:18:24 /dev/vda1 155G 14G 142G 9% /
23:18:24 tmpfs 16G 0 16G 0% /dev/shm
23:18:24 tmpfs 5.0M 0 5.0M 0% /run/lock
23:18:24 tmpfs 16G 0 16G 0% /sys/fs/cgroup
23:18:24 /dev/vda15 105M 4.4M 100M 5% /boot/efi
23:18:24 tmpfs 3.2G 0 3.2G 0% /run/user/1001
23:18:24
23:18:24
23:18:24 ---> free -m:
23:18:24 total used free shared buff/cache available
23:18:24 Mem: 32167 862 25078 0 6226 30849
23:18:24 Swap: 1023 0 1023
23:18:24
23:18:24
23:18:24 ---> ip addr:
23:18:24 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
23:18:24 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
23:18:24 inet 127.0.0.1/8 scope host lo
23:18:24 valid_lft forever preferred_lft forever
23:18:24 inet6 ::1/128 scope host
23:18:24 valid_lft forever preferred_lft forever
23:18:24 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
23:18:24 link/ether fa:16:3e:6c:85:cf brd ff:ff:ff:ff:ff:ff
23:18:24 inet 10.30.107.200/23 brd 10.30.107.255 scope global dynamic ens3
23:18:24 valid_lft 85916sec preferred_lft 85916sec
23:18:24 inet6 fe80::f816:3eff:fe6c:85cf/64 scope link
23:18:24 valid_lft forever preferred_lft forever
23:18:24 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
23:18:24 link/ether 02:42:42:ca:08:53 brd ff:ff:ff:ff:ff:ff
23:18:24 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
23:18:24 valid_lft forever preferred_lft forever
23:18:24
23:18:24
23:18:24 ---> sar -b -r -n DEV:
23:18:24 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-12229) 03/09/24 _x86_64_ (8 CPU)
23:18:24
23:18:24 23:10:24 LINUX RESTART (8 CPU)
23:18:24
23:18:24 23:11:01 tps rtps wtps bread/s bwrtn/s
23:18:24 23:12:01 103.22 22.80 80.42 1178.47 21273.52
23:18:24 23:13:01 129.91 23.23 106.68 2790.47 27905.88
23:18:24 23:14:01 247.42 2.82 244.60 398.67 143064.18
23:18:24 23:15:01 290.02 9.22 280.80 385.70 23603.72
23:18:24 23:16:01 23.71 0.38 23.33 28.93 14375.35
23:18:24 23:17:01 13.13 0.02 13.11 0.13 14069.89
23:18:24 23:18:01 69.57 1.38 68.19 114.78 16582.47
23:18:24 Average: 125.28 8.55 116.73 699.57 37269.83
23:18:24
23:18:24 23:11:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
23:18:24 23:12:01 30075144 31684556 2864076 8.70 71192 1848540 1434468 4.22 890940 1682256 167312
23:18:24 23:13:01 28089508 31657840 4849712 14.72 110144 3699076 1428376 4.20 1005588 3436108 1640748
23:18:24 23:14:01 25662120 31574556 7277100 22.09 141248 5897772 3027892 8.91 1116388 5625748 944
23:18:24 23:15:01 23817136 29878688 9122084 27.69 156040 6015008 8769528 25.80 2960104 5566768 420
23:18:24 23:16:01 23462188 29529956 9477032 28.77 157552 6017276 9077120 26.71 3346104 5526472 528
23:18:24 23:17:01 23440068 29508688 9499152 28.84 157712 6017856 9059856 26.66 3365508 5526452 232
23:18:24 23:18:01 25661752 31568800 7277468 22.09 159304 5873320 1551172 4.56 1352724 5384288 26276
23:18:24 Average: 25743988 30771869 7195232 21.84 136170 5052693 4906916 14.44 2005337 4678299 262351
23:18:24
23:18:24 23:11:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
23:18:24 23:12:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:24 23:12:01 ens3 63.44 43.43 1014.00 8.44 0.00 0.00 0.00 0.00
23:18:24 23:12:01 lo 1.67 1.67 0.18 0.18 0.00 0.00 0.00 0.00
23:18:24 23:13:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:24 23:13:01 ens3 342.84 208.32 10605.34 18.96 0.00 0.00 0.00 0.00
23:18:24 23:13:01 br-c04f62967f40 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:24 23:13:01 lo 7.80 7.80 0.72 0.72 0.00 0.00 0.00 0.00
23:18:24 23:14:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:24 23:14:01 veth95a5415 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:24 23:14:01 vethd01dadd 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:24 23:14:01 ens3 830.32 400.47 21588.72 29.52 0.00 0.00 0.00 0.00
23:18:24 23:15:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:24 23:15:01 veth9b10b15 0.48 0.83 0.05 0.31 0.00 0.00 0.00 0.00
23:18:24 23:15:01 veth95a5415 0.15 0.58 0.01 0.03 0.00 0.00 0.00 0.00
23:18:24 23:15:01 vethd01dadd 0.00 0.38 0.00 0.02 0.00 0.00 0.00 0.00
23:18:24 23:16:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:24 23:16:01 veth9b10b15 0.15 0.18 0.01 0.01 0.00 0.00 0.00 0.00
23:18:24 23:16:01 veth95a5415 0.52 0.50 0.05 1.43 0.00 0.00 0.00 0.00
23:18:24 23:16:01 vethd01dadd 0.00 0.03 0.00 0.00 0.00 0.00 0.00 0.00
23:18:24 23:17:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:24 23:17:01 veth9b10b15 0.17 0.13 0.01 0.01 0.00 0.00 0.00 0.00
23:18:24 23:17:01 veth95a5415 0.57 0.58 0.05 1.51 0.00 0.00 0.00 0.00
23:18:24 23:17:01 vethd01dadd 0.00 0.02 0.00 0.00 0.00 0.00 0.00 0.00
23:18:24 23:18:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:24 23:18:01 ens3 1656.39 946.83 34058.49 149.38 0.00 0.00 0.00 0.00
23:18:24 23:18:01 lo 35.71 35.71 6.25 6.25 0.00 0.00 0.00 0.00
23:18:24 Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:24 Average: ens3 190.90 103.73 4755.25 13.38 0.00 0.00 0.00 0.00
23:18:24 Average: lo 4.54 4.54 0.84 0.84 0.00 0.00 0.00 0.00
23:18:24
23:18:24
23:18:24 ---> sar -P ALL:
23:18:24 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-12229) 03/09/24 _x86_64_ (8 CPU)
23:18:24
23:18:24 23:10:24 LINUX RESTART (8 CPU)
23:18:24
23:18:24 23:11:01 CPU %user %nice %system %iowait %steal %idle
23:18:24 23:12:01 all 10.05 0.00 0.68 1.79 0.03 87.45
23:18:24 23:12:01 0 16.41 0.00 0.75 0.83 0.02 81.99
23:18:24 23:12:01 1 6.99 0.00 0.62 0.12 0.03 92.24
23:18:24 23:12:01 2 27.82 0.00 1.69 1.45 0.05 68.99
23:18:24 23:12:01 3 11.19 0.00 0.77 0.08 0.05 87.90
23:18:24 23:12:01 4 0.08 0.00 0.35 10.30 0.03 89.23
23:18:24 23:12:01 5 5.16 0.00 0.38 0.60 0.05 93.80
23:18:24 23:12:01 6 1.78 0.00 0.28 0.23 0.02 97.68
23:18:24 23:12:01 7 10.98 0.00 0.60 0.69 0.03 87.69
23:18:24 23:13:01 all 11.91 0.00 2.66 1.96 0.06 83.41
23:18:24 23:13:01 0 32.15 0.00 4.40 0.45 0.08 62.92
23:18:24 23:13:01 1 8.24 0.00 2.40 3.79 0.07 85.50
23:18:24 23:13:01 2 7.58 0.00 2.87 1.78 0.07 87.71
23:18:24 23:13:01 3 7.09 0.00 2.19 0.05 0.03 90.64
23:18:24 23:13:01 4 4.88 0.00 1.85 6.85 0.03 86.39
23:18:24 23:13:01 5 6.12 0.00 2.24 0.69 0.03 90.92
23:18:24 23:13:01 6 4.81 0.00 2.51 0.61 0.07 92.00
23:18:24 23:13:01 7 24.52 0.00 2.85 1.41 0.05 71.17
23:18:24 23:14:01 all 8.37 0.00 3.70 10.99 0.06 76.87
23:18:24 23:14:01 0 8.91 0.00 3.29 2.30 0.08 85.41
23:18:24 23:14:01 1 10.74 0.00 4.43 2.88 0.05 81.89
23:18:24 23:14:01 2 8.35 0.00 4.35 25.04 0.07 62.19
23:18:24 23:14:01 3 7.71 0.00 5.04 19.33 0.08 67.83
23:18:24 23:14:01 4 8.94 0.00 3.30 6.18 0.05 81.53
23:18:24 23:14:01 5 10.08 0.00 3.36 0.30 0.05 86.21
23:18:24 23:14:01 6 7.21 0.00 4.11 31.79 0.08 56.80
23:18:24 23:14:01 7 5.02 0.00 1.72 0.30 0.03 92.93
23:18:24 23:15:01 all 17.87 0.00 3.14 11.74 0.10 67.15
23:18:24 23:15:01 0 13.60 0.00 3.10 6.55 0.12 76.64
23:18:24 23:15:01 1 21.02 0.00 3.36 18.97 0.13 56.51
23:18:24 23:15:01 2 22.92 0.00 3.83 11.73 0.08 61.43
23:18:24 23:15:01 3 16.40 0.00 3.11 21.84 0.10 58.55
23:18:24 23:15:01 4 14.44 0.00 2.75 10.10 0.13 72.58
23:18:24 23:15:01 5 18.39 0.00 3.47 7.01 0.07 71.05
23:18:24 23:15:01 6 20.54 0.00 2.99 6.91 0.07 69.48
23:18:24 23:15:01 7 15.63 0.00 2.49 10.86 0.10 70.92
23:18:24 23:16:01 all 16.77 0.00 1.58 0.73 0.06 80.85
23:18:24 23:16:01 0 17.54 0.00 1.64 0.02 0.07 80.73
23:18:24 23:16:01 1 18.43 0.00 1.87 0.00 0.08 79.61
23:18:24 23:16:01 2 17.62 0.00 1.81 0.08 0.07 80.43
23:18:24 23:16:01 3 15.27 0.00 1.37 0.03 0.07 83.26
23:18:24 23:16:01 4 13.92 0.00 1.17 5.64 0.07 79.20
23:18:24 23:16:01 5 16.53 0.00 1.56 0.05 0.07 81.80
23:18:24 23:16:01 6 15.98 0.00 1.71 0.03 0.07 82.21
23:18:24 23:16:01 7 18.88 0.00 1.55 0.02 0.05 79.50
23:18:24 23:17:01 all 1.20 0.00 0.19 0.69 0.06 97.86
23:18:24 23:17:01 0 1.04 0.00 0.27 0.00 0.05 98.64
23:18:24 23:17:01 1 1.23 0.00 0.17 0.02 0.05 98.53
23:18:24 23:17:01 2 1.33 0.00 0.13 0.00 0.03 98.50
23:18:24 23:17:01 3 1.57 0.00 0.18 0.02 0.05 98.18
23:18:24 23:17:01 4 1.17 0.00 0.20 5.47 0.08 93.08
23:18:24 23:17:01 5 1.62 0.00 0.22 0.00 0.07 98.10
23:18:24 23:17:01 6 0.63 0.00 0.23 0.02 0.05 99.07
23:18:24 23:17:01 7 1.02 0.00 0.15 0.00 0.05 98.78
23:18:24 23:18:01 all 4.90 0.00 0.77 0.99 0.04 93.30
23:18:24 23:18:01 0 3.14 0.00 0.61 0.10 0.05 96.10
23:18:24 23:18:01 1 16.38 0.00 1.15 0.28 0.03 82.15
23:18:24 23:18:01 2 1.29 0.00 0.67 4.97 0.03 93.04
23:18:24 23:18:01 3 3.95 0.00 0.90 0.17 0.05 94.93
23:18:24 23:18:01 4 1.67 0.00 0.80 1.70 0.03 95.79
23:18:24 23:18:01 5 3.33 0.00 0.54 0.08 0.02 96.04
23:18:24 23:18:01 6 1.55 0.00 0.65 0.50 0.03 97.26
23:18:24 23:18:01 7 7.91 0.00 0.84 0.10 0.03 91.12
23:18:24 Average: all 10.14 0.00 1.81 4.11 0.06 83.87
23:18:24 Average: 0 13.22 0.00 2.00 1.46 0.07 83.25
23:18:24 Average: 1 11.86 0.00 2.00 3.72 0.06 82.36
23:18:24 Average: 2 12.41 0.00 2.18 6.40 0.06 78.95
23:18:24 Average: 3 9.02 0.00 1.93 5.89 0.06 83.10
23:18:24 Average: 4 6.43 0.00 1.48 6.61 0.06 85.42
23:18:24 Average: 5 8.74 0.00 1.68 1.25 0.05 88.29
23:18:24 Average: 6 7.49 0.00 1.78 5.67 0.06 85.01
23:18:24 Average: 7 11.98 0.00 1.45 1.90 0.05 84.61
23:18:24
23:18:24
23:18:24
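The two reports above come from the sysstat data collected during the run and can be regenerated on a node with sysstat installed using the commands named in their section headers. The awk one-liner is only an illustrative way to pull the per-CPU average %iowait out of the second report; it is not part of the job's own sysstat.sh:

    sar -b -r -n DEV    # I/O transfer rates, memory utilisation, per-interface network stats
    sar -P ALL          # per-CPU utilisation, one block per sampling interval
    # Example: average %iowait per CPU (column 6 of the "Average:" rows in the second report)
    sar -P ALL | awk '$1 == "Average:" && $2 != "CPU" { print $2, $6 }'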