23:11:01 Started by timer
23:11:01 Running as SYSTEM
23:11:01 [EnvInject] - Loading node environment variables.
23:11:01 Building remotely on prd-ubuntu1804-docker-8c-8g-12595 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap
23:11:01 [ssh-agent] Looking for ssh-agent implementation...
23:11:01 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
23:11:01 $ ssh-agent
23:11:01 SSH_AUTH_SOCK=/tmp/ssh-95KeyyjFsyyc/agent.2083
23:11:01 SSH_AGENT_PID=2085
23:11:01 [ssh-agent] Started.
23:11:01 Running ssh-add (command line suppressed)
23:11:01 Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_7109205230616855844.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_7109205230616855844.key)
23:11:01 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user)
23:11:01 The recommended git tool is: NONE
23:11:02 using credential onap-jenkins-ssh
23:11:02 Wiping out workspace first.
23:11:02 Cloning the remote Git repository
23:11:03 Cloning repository git://cloud.onap.org/mirror/policy/docker.git
23:11:03 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10
23:11:03 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git
23:11:03 > git --version # timeout=10
23:11:03 > git --version # 'git version 2.17.1'
23:11:03 using GIT_SSH to set credentials Gerrit user
23:11:03 Verifying host key using manually-configured host key entries
23:11:03 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30
23:11:03 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10
23:11:03 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
23:11:03 Avoid second fetch
23:11:03 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10
23:11:03 Checking out Revision 9e33a52d0cf03c0458911330fb72037d01b07a4a (refs/remotes/origin/master)
23:11:03 > git config core.sparsecheckout # timeout=10
23:11:03 > git checkout -f 9e33a52d0cf03c0458911330fb72037d01b07a4a # timeout=30
23:11:04 Commit message: "Add Prometheus config for http and k8s participants in csit"
23:11:04 > git rev-list --no-walk 5582cd406c8414919c4d5d7f5b116f4f1e5a971d # timeout=10
23:11:04 provisioning config files...
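For reference, the checkout performed above can be reproduced outside Jenkins with plain git; a minimal sketch based only on the commands and revision shown in this log (the mirror URL and commit hash are taken verbatim from the lines above):

  # sketch: fetch the policy/docker mirror and check out the revision built in this job
  git init policy-docker && cd policy-docker
  git fetch --tags git://cloud.onap.org/mirror/policy/docker.git '+refs/heads/*:refs/remotes/origin/*'
  git checkout -f 9e33a52d0cf03c0458911330fb72037d01b07a4a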
23:11:04 copy managed file [npmrc] to file:/home/jenkins/.npmrc 23:11:04 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf 23:11:04 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins17606403569575929373.sh 23:11:04 ---> python-tools-install.sh 23:11:04 Setup pyenv: 23:11:04 * system (set by /opt/pyenv/version) 23:11:04 * 3.8.13 (set by /opt/pyenv/version) 23:11:04 * 3.9.13 (set by /opt/pyenv/version) 23:11:04 * 3.10.6 (set by /opt/pyenv/version) 23:11:08 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-OU5U 23:11:08 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv 23:11:11 lf-activate-venv(): INFO: Installing: lftools 23:11:44 lf-activate-venv(): INFO: Adding /tmp/venv-OU5U/bin to PATH 23:11:44 Generating Requirements File 23:12:12 Python 3.10.6 23:12:12 pip 24.0 from /tmp/venv-OU5U/lib/python3.10/site-packages/pip (python 3.10) 23:12:13 appdirs==1.4.4 23:12:13 argcomplete==3.2.3 23:12:13 aspy.yaml==1.3.0 23:12:13 attrs==23.2.0 23:12:13 autopage==0.5.2 23:12:13 beautifulsoup4==4.12.3 23:12:13 boto3==1.34.60 23:12:13 botocore==1.34.60 23:12:13 bs4==0.0.2 23:12:13 cachetools==5.3.3 23:12:13 certifi==2024.2.2 23:12:13 cffi==1.16.0 23:12:13 cfgv==3.4.0 23:12:13 chardet==5.2.0 23:12:13 charset-normalizer==3.3.2 23:12:13 click==8.1.7 23:12:13 cliff==4.6.0 23:12:13 cmd2==2.4.3 23:12:13 cryptography==3.3.2 23:12:13 debtcollector==3.0.0 23:12:13 decorator==5.1.1 23:12:13 defusedxml==0.7.1 23:12:13 Deprecated==1.2.14 23:12:13 distlib==0.3.8 23:12:13 dnspython==2.6.1 23:12:13 docker==4.2.2 23:12:13 dogpile.cache==1.3.2 23:12:13 email_validator==2.1.1 23:12:13 filelock==3.13.1 23:12:13 future==1.0.0 23:12:13 gitdb==4.0.11 23:12:13 GitPython==3.1.42 23:12:13 google-auth==2.28.2 23:12:13 httplib2==0.22.0 23:12:13 identify==2.5.35 23:12:13 idna==3.6 23:12:13 importlib-resources==1.5.0 23:12:13 iso8601==2.1.0 23:12:13 Jinja2==3.1.3 23:12:13 jmespath==1.0.1 23:12:13 jsonpatch==1.33 23:12:13 jsonpointer==2.4 23:12:13 jsonschema==4.21.1 23:12:13 jsonschema-specifications==2023.12.1 23:12:13 keystoneauth1==5.6.0 23:12:13 kubernetes==29.0.0 23:12:13 lftools==0.37.9 23:12:13 lxml==5.1.0 23:12:13 MarkupSafe==2.1.5 23:12:13 msgpack==1.0.8 23:12:13 multi_key_dict==2.0.3 23:12:13 netaddr==1.2.1 23:12:13 netifaces==0.11.0 23:12:13 niet==1.4.2 23:12:13 nodeenv==1.8.0 23:12:13 oauth2client==4.1.3 23:12:13 oauthlib==3.2.2 23:12:13 openstacksdk==3.0.0 23:12:13 os-client-config==2.1.0 23:12:13 os-service-types==1.7.0 23:12:13 osc-lib==3.0.1 23:12:13 oslo.config==9.4.0 23:12:13 oslo.context==5.5.0 23:12:13 oslo.i18n==6.3.0 23:12:13 oslo.log==5.5.0 23:12:13 oslo.serialization==5.4.0 23:12:13 oslo.utils==7.1.0 23:12:13 packaging==24.0 23:12:13 pbr==6.0.0 23:12:13 platformdirs==4.2.0 23:12:13 prettytable==3.10.0 23:12:13 pyasn1==0.5.1 23:12:13 pyasn1-modules==0.3.0 23:12:13 pycparser==2.21 23:12:13 pygerrit2==2.0.15 23:12:13 PyGithub==2.2.0 23:12:13 pyinotify==0.9.6 23:12:13 PyJWT==2.8.0 23:12:13 PyNaCl==1.5.0 23:12:13 pyparsing==2.4.7 23:12:13 pyperclip==1.8.2 23:12:13 pyrsistent==0.20.0 23:12:13 python-cinderclient==9.5.0 23:12:13 python-dateutil==2.9.0.post0 23:12:13 python-heatclient==3.5.0 23:12:13 python-jenkins==1.8.2 23:12:13 python-keystoneclient==5.4.0 23:12:13 python-magnumclient==4.4.0 23:12:13 python-novaclient==18.5.0 23:12:13 python-openstackclient==6.5.0 23:12:13 python-swiftclient==4.5.0 23:12:13 PyYAML==6.0.1 23:12:13 referencing==0.33.0 23:12:13 requests==2.31.0 23:12:13 requests-oauthlib==1.4.0 23:12:13 requestsexceptions==1.4.0 23:12:13 
rfc3986==2.0.0
23:12:13 rpds-py==0.18.0
23:12:13 rsa==4.9
23:12:13 ruamel.yaml==0.18.6
23:12:13 ruamel.yaml.clib==0.2.8
23:12:13 s3transfer==0.10.0
23:12:13 simplejson==3.19.2
23:12:13 six==1.16.0
23:12:13 smmap==5.0.1
23:12:13 soupsieve==2.5
23:12:13 stevedore==5.2.0
23:12:13 tabulate==0.9.0
23:12:13 toml==0.10.2
23:12:13 tomlkit==0.12.4
23:12:13 tqdm==4.66.2
23:12:13 typing_extensions==4.10.0
23:12:13 tzdata==2024.1
23:12:13 urllib3==1.26.18
23:12:13 virtualenv==20.25.1
23:12:13 wcwidth==0.2.13
23:12:13 websocket-client==1.7.0
23:12:13 wrapt==1.16.0
23:12:13 xdg==6.0.0
23:12:13 xmltodict==0.13.0
23:12:13 yq==3.2.3
23:12:13 [EnvInject] - Injecting environment variables from a build step.
23:12:13 [EnvInject] - Injecting as environment variables the properties content
23:12:13 SET_JDK_VERSION=openjdk17
23:12:13 GIT_URL="git://cloud.onap.org/mirror"
23:12:13 
23:12:13 [EnvInject] - Variables injected successfully.
23:12:13 [policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins2023966637108123372.sh
23:12:13 ---> update-java-alternatives.sh
23:12:13 ---> Updating Java version
23:12:13 ---> Ubuntu/Debian system detected
23:12:13 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode
23:12:13 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode
23:12:13 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode
23:12:13 openjdk version "17.0.4" 2022-07-19
23:12:13 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04)
23:12:13 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing)
23:12:14 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
23:12:14 [EnvInject] - Injecting environment variables from a build step.
23:12:14 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env'
23:12:14 [EnvInject] - Variables injected successfully.
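The update-java-alternatives.sh step above switches the node's default JDK to OpenJDK 17. Assuming a stock Ubuntu 18.04 image with the OpenJDK 17 package already installed, roughly the same effect can be achieved manually with update-alternatives (a sketch, not the job's script itself):

  # sketch: point the java/javac alternatives at OpenJDK 17 and export JAVA_HOME
  sudo update-alternatives --set java /usr/lib/jvm/java-17-openjdk-amd64/bin/java
  sudo update-alternatives --set javac /usr/lib/jvm/java-17-openjdk-amd64/bin/javac
  export JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64
  java -version   # should report openjdk 17, as in the output above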
23:12:14 [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins286296263146529350.sh 23:12:14 + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap 23:12:14 + set +u 23:12:14 + save_set 23:12:14 + RUN_CSIT_SAVE_SET=ehxB 23:12:14 + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace 23:12:14 + '[' 1 -eq 0 ']' 23:12:14 + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:12:14 + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:14 + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:14 + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts 23:12:14 + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts 23:12:14 + export ROBOT_VARIABLES= 23:12:14 + ROBOT_VARIABLES= 23:12:14 + export PROJECT=pap 23:12:14 + PROJECT=pap 23:12:14 + cd /w/workspace/policy-pap-master-project-csit-pap 23:12:14 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 23:12:14 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 23:12:14 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh 23:12:14 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']' 23:12:14 + relax_set 23:12:14 + set +e 23:12:14 + set +o pipefail 23:12:14 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh 23:12:14 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:12:14 +++ mktemp -d 23:12:14 ++ ROBOT_VENV=/tmp/tmp.TBGPmFgVaW 23:12:14 ++ echo ROBOT_VENV=/tmp/tmp.TBGPmFgVaW 23:12:14 +++ python3 --version 23:12:14 ++ echo 'Python version is: Python 3.6.9' 23:12:14 Python version is: Python 3.6.9 23:12:14 ++ python3 -m venv --clear /tmp/tmp.TBGPmFgVaW 23:12:15 ++ source /tmp/tmp.TBGPmFgVaW/bin/activate 23:12:15 +++ deactivate nondestructive 23:12:15 +++ '[' -n '' ']' 23:12:15 +++ '[' -n '' ']' 23:12:15 +++ '[' -n /bin/bash -o -n '' ']' 23:12:15 +++ hash -r 23:12:15 +++ '[' -n '' ']' 23:12:15 +++ unset VIRTUAL_ENV 23:12:15 +++ '[' '!' 
nondestructive = nondestructive ']' 23:12:15 +++ VIRTUAL_ENV=/tmp/tmp.TBGPmFgVaW 23:12:15 +++ export VIRTUAL_ENV 23:12:15 +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:15 +++ PATH=/tmp/tmp.TBGPmFgVaW/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:15 +++ export PATH 23:12:15 +++ '[' -n '' ']' 23:12:15 +++ '[' -z '' ']' 23:12:15 +++ _OLD_VIRTUAL_PS1= 23:12:15 +++ '[' 'x(tmp.TBGPmFgVaW) ' '!=' x ']' 23:12:15 +++ PS1='(tmp.TBGPmFgVaW) ' 23:12:15 +++ export PS1 23:12:15 +++ '[' -n /bin/bash -o -n '' ']' 23:12:15 +++ hash -r 23:12:15 ++ set -exu 23:12:15 ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1' 23:12:18 ++ echo 'Installing Python Requirements' 23:12:18 Installing Python Requirements 23:12:18 ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt 23:12:37 ++ python3 -m pip -qq freeze 23:12:37 bcrypt==4.0.1 23:12:37 beautifulsoup4==4.12.3 23:12:37 bitarray==2.9.2 23:12:37 certifi==2024.2.2 23:12:37 cffi==1.15.1 23:12:37 charset-normalizer==2.0.12 23:12:37 cryptography==40.0.2 23:12:37 decorator==5.1.1 23:12:37 elasticsearch==7.17.9 23:12:37 elasticsearch-dsl==7.4.1 23:12:37 enum34==1.1.10 23:12:37 idna==3.6 23:12:37 importlib-resources==5.4.0 23:12:37 ipaddr==2.2.0 23:12:37 isodate==0.6.1 23:12:37 jmespath==0.10.0 23:12:37 jsonpatch==1.32 23:12:37 jsonpath-rw==1.4.0 23:12:37 jsonpointer==2.3 23:12:37 lxml==5.1.0 23:12:37 netaddr==0.8.0 23:12:37 netifaces==0.11.0 23:12:37 odltools==0.1.28 23:12:37 paramiko==3.4.0 23:12:37 pkg_resources==0.0.0 23:12:37 ply==3.11 23:12:37 pyang==2.6.0 23:12:37 pyangbind==0.8.1 23:12:37 pycparser==2.21 23:12:37 pyhocon==0.3.60 23:12:37 PyNaCl==1.5.0 23:12:37 pyparsing==3.1.2 23:12:37 python-dateutil==2.9.0.post0 23:12:37 regex==2023.8.8 23:12:37 requests==2.27.1 23:12:37 robotframework==6.1.1 23:12:37 robotframework-httplibrary==0.4.2 23:12:37 robotframework-pythonlibcore==3.0.0 23:12:37 robotframework-requests==0.9.4 23:12:37 robotframework-selenium2library==3.0.0 23:12:37 robotframework-seleniumlibrary==5.1.3 23:12:37 robotframework-sshlibrary==3.8.0 23:12:37 scapy==2.5.0 23:12:37 scp==0.14.5 23:12:37 selenium==3.141.0 23:12:37 six==1.16.0 23:12:37 soupsieve==2.3.2.post1 23:12:37 urllib3==1.26.18 23:12:37 waitress==2.0.0 23:12:37 WebOb==1.8.7 23:12:37 WebTest==3.0.0 23:12:37 zipp==3.6.0 23:12:37 ++ mkdir -p /tmp/tmp.TBGPmFgVaW/src/onap 23:12:37 ++ rm -rf /tmp/tmp.TBGPmFgVaW/src/onap/testsuite 23:12:37 ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre 23:12:43 ++ echo 'Installing python confluent-kafka library' 23:12:43 Installing python confluent-kafka library 23:12:43 ++ python3 -m pip install -qq confluent-kafka 23:12:44 ++ echo 'Uninstall docker-py and reinstall docker.' 23:12:44 Uninstall docker-py and reinstall docker. 
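Condensed, the Robot environment preparation traced above (prepare-robot-env.sh) amounts to roughly the following; this is a sketch assembled from the commands echoed in this log (paths shortened to be relative to the repo checkout), not the script itself:

  # sketch: throwaway venv with the CSIT Python/Robot dependencies
  ROBOT_VENV=$(mktemp -d)
  python3 -m venv --clear "$ROBOT_VENV"
  source "$ROBOT_VENV/bin/activate"
  python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1'
  python3 -m pip install -qq -r csit/resources/scripts/pylibs.txt
  python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre
  python3 -m pip install -qq confluent-kafka
  # docker-py is replaced by the docker package (see the uninstall/reinstall that follows)
  python3 -m pip uninstall -y -qq docker && python3 -m pip install -U -qq docker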
23:12:44 ++ python3 -m pip uninstall -y -qq docker 23:12:45 ++ python3 -m pip install -U -qq docker 23:12:46 ++ python3 -m pip -qq freeze 23:12:46 bcrypt==4.0.1 23:12:46 beautifulsoup4==4.12.3 23:12:46 bitarray==2.9.2 23:12:46 certifi==2024.2.2 23:12:46 cffi==1.15.1 23:12:46 charset-normalizer==2.0.12 23:12:46 confluent-kafka==2.3.0 23:12:46 cryptography==40.0.2 23:12:46 decorator==5.1.1 23:12:46 deepdiff==5.7.0 23:12:46 dnspython==2.2.1 23:12:46 docker==5.0.3 23:12:46 elasticsearch==7.17.9 23:12:46 elasticsearch-dsl==7.4.1 23:12:46 enum34==1.1.10 23:12:46 future==1.0.0 23:12:46 idna==3.6 23:12:46 importlib-resources==5.4.0 23:12:46 ipaddr==2.2.0 23:12:46 isodate==0.6.1 23:12:46 Jinja2==3.0.3 23:12:46 jmespath==0.10.0 23:12:46 jsonpatch==1.32 23:12:46 jsonpath-rw==1.4.0 23:12:46 jsonpointer==2.3 23:12:46 kafka-python==2.0.2 23:12:46 lxml==5.1.0 23:12:46 MarkupSafe==2.0.1 23:12:46 more-itertools==5.0.0 23:12:46 netaddr==0.8.0 23:12:46 netifaces==0.11.0 23:12:46 odltools==0.1.28 23:12:46 ordered-set==4.0.2 23:12:46 paramiko==3.4.0 23:12:46 pbr==6.0.0 23:12:46 pkg_resources==0.0.0 23:12:46 ply==3.11 23:12:46 protobuf==3.19.6 23:12:46 pyang==2.6.0 23:12:46 pyangbind==0.8.1 23:12:46 pycparser==2.21 23:12:46 pyhocon==0.3.60 23:12:46 PyNaCl==1.5.0 23:12:46 pyparsing==3.1.2 23:12:46 python-dateutil==2.9.0.post0 23:12:46 PyYAML==6.0.1 23:12:46 regex==2023.8.8 23:12:46 requests==2.27.1 23:12:46 robotframework==6.1.1 23:12:46 robotframework-httplibrary==0.4.2 23:12:46 robotframework-onap==0.6.0.dev105 23:12:46 robotframework-pythonlibcore==3.0.0 23:12:46 robotframework-requests==0.9.4 23:12:46 robotframework-selenium2library==3.0.0 23:12:46 robotframework-seleniumlibrary==5.1.3 23:12:46 robotframework-sshlibrary==3.8.0 23:12:46 robotlibcore-temp==1.0.2 23:12:46 scapy==2.5.0 23:12:46 scp==0.14.5 23:12:46 selenium==3.141.0 23:12:46 six==1.16.0 23:12:46 soupsieve==2.3.2.post1 23:12:46 urllib3==1.26.18 23:12:46 waitress==2.0.0 23:12:46 WebOb==1.8.7 23:12:46 websocket-client==1.3.1 23:12:46 WebTest==3.0.0 23:12:46 zipp==3.6.0 23:12:46 ++ grep -q Linux 23:12:46 ++ uname 23:12:46 ++ sudo apt-get -y -qq install libxml2-utils 23:12:47 + load_set 23:12:47 + _setopts=ehuxB 23:12:47 ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace 23:12:47 ++ tr : ' ' 23:12:47 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:47 + set +o braceexpand 23:12:47 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:47 + set +o hashall 23:12:47 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:47 + set +o interactive-comments 23:12:47 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:47 + set +o nounset 23:12:47 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:47 + set +o xtrace 23:12:47 ++ sed 's/./& /g' 23:12:47 ++ echo ehuxB 23:12:47 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:47 + set +e 23:12:47 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:47 + set +h 23:12:47 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:47 + set +u 23:12:47 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:47 + set +x 23:12:47 + source_safely /tmp/tmp.TBGPmFgVaW/bin/activate 23:12:47 + '[' -z /tmp/tmp.TBGPmFgVaW/bin/activate ']' 23:12:47 + relax_set 23:12:47 + set +e 23:12:47 + set +o pipefail 23:12:47 + . 
/tmp/tmp.TBGPmFgVaW/bin/activate 23:12:47 ++ deactivate nondestructive 23:12:47 ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']' 23:12:47 ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:47 ++ export PATH 23:12:47 ++ unset _OLD_VIRTUAL_PATH 23:12:47 ++ '[' -n '' ']' 23:12:47 ++ '[' -n /bin/bash -o -n '' ']' 23:12:47 ++ hash -r 23:12:47 ++ '[' -n '' ']' 23:12:47 ++ unset VIRTUAL_ENV 23:12:47 ++ '[' '!' nondestructive = nondestructive ']' 23:12:47 ++ VIRTUAL_ENV=/tmp/tmp.TBGPmFgVaW 23:12:47 ++ export VIRTUAL_ENV 23:12:47 ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:47 ++ PATH=/tmp/tmp.TBGPmFgVaW/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:47 ++ export PATH 23:12:47 ++ '[' -n '' ']' 23:12:47 ++ '[' -z '' ']' 23:12:47 ++ _OLD_VIRTUAL_PS1='(tmp.TBGPmFgVaW) ' 23:12:47 ++ '[' 'x(tmp.TBGPmFgVaW) ' '!=' x ']' 23:12:47 ++ PS1='(tmp.TBGPmFgVaW) (tmp.TBGPmFgVaW) ' 23:12:47 ++ export PS1 23:12:47 ++ '[' -n /bin/bash -o -n '' ']' 23:12:47 ++ hash -r 23:12:47 + load_set 23:12:47 + _setopts=hxB 23:12:47 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:12:47 ++ tr : ' ' 23:12:47 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:47 + set +o braceexpand 23:12:47 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:47 + set +o hashall 23:12:47 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:47 + set +o interactive-comments 23:12:47 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:47 + set +o xtrace 23:12:47 ++ echo hxB 23:12:47 ++ sed 's/./& /g' 23:12:47 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:47 + set +h 23:12:47 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:47 + set +x 23:12:47 + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests 23:12:47 + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests 23:12:47 + export TEST_OPTIONS= 23:12:47 + TEST_OPTIONS= 23:12:47 ++ mktemp -d 23:12:47 + WORKDIR=/tmp/tmp.xIYONv2aFw 23:12:47 + cd /tmp/tmp.xIYONv2aFw 23:12:47 + docker login -u docker -p docker nexus3.onap.org:10001 23:12:47 WARNING! Using --password via the CLI is insecure. Use --password-stdin. 23:12:47 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json. 23:12:47 Configure a credential helper to remove this warning. 
See 23:12:47 https://docs.docker.com/engine/reference/commandline/login/#credentials-store 23:12:47 23:12:47 Login Succeeded 23:12:47 + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:12:47 + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' 23:12:47 + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh' 23:12:47 Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:12:47 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:12:47 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' 23:12:47 + relax_set 23:12:47 + set +e 23:12:47 + set +o pipefail 23:12:47 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:12:47 ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh 23:12:47 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:12:47 ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview 23:12:47 +++ GERRIT_BRANCH=master 23:12:47 +++ echo GERRIT_BRANCH=master 23:12:47 GERRIT_BRANCH=master 23:12:47 +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models 23:12:47 +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models 23:12:47 +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models 23:12:47 Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'... 23:12:48 +++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies 23:12:48 +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies 23:12:48 +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 23:12:48 +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 23:12:48 +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json 23:12:48 +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json 23:12:48 ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana 23:12:48 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:12:48 +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose 23:12:48 +++ grafana=false 23:12:48 +++ gui=false 23:12:48 +++ [[ 2 -gt 0 ]] 23:12:48 +++ key=apex-pdp 23:12:48 +++ case $key in 23:12:48 +++ echo apex-pdp 23:12:48 apex-pdp 23:12:48 +++ component=apex-pdp 23:12:48 +++ shift 23:12:48 +++ [[ 1 -gt 0 ]] 23:12:48 +++ key=--grafana 23:12:48 +++ case $key in 23:12:48 +++ grafana=true 23:12:48 +++ shift 23:12:48 +++ [[ 0 -gt 0 ]] 23:12:48 +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose 23:12:48 +++ echo 'Configuring docker compose...' 23:12:48 Configuring docker compose... 
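For reference, the start-compose.sh step being configured here boils down to roughly the commands that follow in this log; a minimal sketch (the port-wait loop is an assumption about what wait_for_rest.sh does, based on the "Waiting for REST to come up on localhost port 30003" message further down):

  # sketch: bring up the PAP CSIT stack with Grafana, then wait for the PAP REST port
  cd compose
  source export-ports.sh
  source get-versions.sh
  docker-compose up -d apex-pdp grafana   # pulls and starts the dependent containers (mariadb, kafka, policy-api, policy-pap, prometheus, ...)
  until nc -z localhost 30003; do sleep 5; done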
23:12:48 +++ source export-ports.sh 23:12:48 +++ source get-versions.sh 23:12:50 +++ '[' -z pap ']' 23:12:50 +++ '[' -n apex-pdp ']' 23:12:50 +++ '[' apex-pdp == logs ']' 23:12:50 +++ '[' true = true ']' 23:12:50 +++ echo 'Starting apex-pdp application with Grafana' 23:12:50 Starting apex-pdp application with Grafana 23:12:50 +++ docker-compose up -d apex-pdp grafana 23:12:51 Creating network "compose_default" with the default driver 23:12:51 Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)... 23:12:51 latest: Pulling from prom/prometheus 23:12:54 Digest: sha256:bc1794e85c9e00293351b967efa267ce6af1c824ac875a9d0c7ac84700a8b53e 23:12:54 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest 23:12:54 Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)... 23:12:55 latest: Pulling from grafana/grafana 23:13:00 Digest: sha256:f9811e4e687ffecf1a43adb9b64096c50bc0d7a782f8608530f478b6542de7d5 23:13:00 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest 23:13:00 Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)... 23:13:00 10.10.2: Pulling from mariadb 23:13:05 Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e 23:13:05 Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2 23:13:05 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)... 23:13:05 3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator 23:13:09 Digest: sha256:5772a5c551b30d73f901debb8dc38f305559b920e248a9ccb1dba3b880278a13 23:13:09 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT 23:13:09 Pulling zookeeper (confluentinc/cp-zookeeper:latest)... 23:13:09 latest: Pulling from confluentinc/cp-zookeeper 23:13:22 Digest: sha256:9babd1c0beaf93189982bdbb9fe4bf194a2730298b640c057817746c19838866 23:13:22 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest 23:13:22 Pulling kafka (confluentinc/cp-kafka:latest)... 23:13:22 latest: Pulling from confluentinc/cp-kafka 23:13:28 Digest: sha256:24cdd3a7fa89d2bed150560ebea81ff1943badfa61e51d66bb541a6b0d7fb047 23:13:28 Status: Downloaded newer image for confluentinc/cp-kafka:latest 23:13:28 Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)... 23:13:28 3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator 23:13:33 Digest: sha256:37b4f26d0170f90ca974aea8100c4fea8bf2a2b3b5cdb1e4e7c97492d3a4ad6a 23:13:33 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT 23:13:33 Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)... 23:13:34 3.1.2-SNAPSHOT: Pulling from onap/policy-api 23:13:42 Digest: sha256:fdc9aa26830be0af882248f5f576f0e9466b8e17ff432e8618d01432efa85803 23:13:42 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT 23:13:42 Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)... 23:13:43 3.1.2-SNAPSHOT: Pulling from onap/policy-pap 23:13:45 Digest: sha256:5e7bdea16830f0dd3e16df519f0efbee38922192c2a79297bcac6699fa44e067 23:13:45 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT 23:13:45 Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)... 
23:13:45 3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp 23:13:55 Digest: sha256:6150a977631ab72b68f6d8aef4c9bd1e7c9ba8079ef3864510ec09056daa579d 23:13:55 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT 23:13:55 Creating prometheus ... 23:13:55 Creating compose_zookeeper_1 ... 23:13:55 Creating simulator ... 23:13:55 Creating mariadb ... 23:14:06 Creating compose_zookeeper_1 ... done 23:14:06 Creating kafka ... 23:14:07 Creating kafka ... done 23:14:08 Creating mariadb ... done 23:14:08 Creating policy-db-migrator ... 23:14:09 Creating policy-db-migrator ... done 23:14:09 Creating policy-api ... 23:14:10 Creating policy-api ... done 23:14:10 Creating policy-pap ... 23:14:11 Creating simulator ... done 23:14:12 Creating prometheus ... done 23:14:12 Creating grafana ... 23:14:13 Creating policy-pap ... done 23:14:13 Creating policy-apex-pdp ... 23:14:13 Creating grafana ... done 23:14:15 Creating policy-apex-pdp ... done 23:14:15 +++ echo 'Prometheus server: http://localhost:30259' 23:14:15 Prometheus server: http://localhost:30259 23:14:15 +++ echo 'Grafana server: http://localhost:30269' 23:14:15 Grafana server: http://localhost:30269 23:14:15 +++ cd /w/workspace/policy-pap-master-project-csit-pap 23:14:15 ++ sleep 10 23:14:25 ++ unset http_proxy https_proxy 23:14:25 ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003 23:14:25 Waiting for REST to come up on localhost port 30003... 23:14:25 NAMES STATUS 23:14:25 policy-apex-pdp Up 10 seconds 23:14:25 grafana Up 11 seconds 23:14:25 policy-pap Up 12 seconds 23:14:25 policy-api Up 15 seconds 23:14:25 kafka Up 18 seconds 23:14:25 compose_zookeeper_1 Up 19 seconds 23:14:25 mariadb Up 17 seconds 23:14:25 simulator Up 13 seconds 23:14:25 prometheus Up 12 seconds 23:14:30 NAMES STATUS 23:14:30 policy-apex-pdp Up 15 seconds 23:14:30 grafana Up 16 seconds 23:14:30 policy-pap Up 17 seconds 23:14:30 policy-api Up 20 seconds 23:14:30 kafka Up 23 seconds 23:14:30 compose_zookeeper_1 Up 24 seconds 23:14:30 mariadb Up 22 seconds 23:14:30 simulator Up 19 seconds 23:14:30 prometheus Up 17 seconds 23:14:35 NAMES STATUS 23:14:35 policy-apex-pdp Up 20 seconds 23:14:35 grafana Up 21 seconds 23:14:35 policy-pap Up 22 seconds 23:14:35 policy-api Up 25 seconds 23:14:35 kafka Up 28 seconds 23:14:35 compose_zookeeper_1 Up 29 seconds 23:14:35 mariadb Up 27 seconds 23:14:35 simulator Up 24 seconds 23:14:35 prometheus Up 23 seconds 23:14:40 NAMES STATUS 23:14:40 policy-apex-pdp Up 25 seconds 23:14:40 grafana Up 26 seconds 23:14:40 policy-pap Up 27 seconds 23:14:40 policy-api Up 30 seconds 23:14:40 kafka Up 33 seconds 23:14:40 compose_zookeeper_1 Up 34 seconds 23:14:40 mariadb Up 32 seconds 23:14:40 simulator Up 29 seconds 23:14:40 prometheus Up 28 seconds 23:14:45 NAMES STATUS 23:14:45 policy-apex-pdp Up 30 seconds 23:14:45 grafana Up 31 seconds 23:14:45 policy-pap Up 32 seconds 23:14:45 policy-api Up 35 seconds 23:14:45 kafka Up 38 seconds 23:14:45 compose_zookeeper_1 Up 39 seconds 23:14:45 mariadb Up 37 seconds 23:14:45 simulator Up 34 seconds 23:14:45 prometheus Up 33 seconds 23:14:50 NAMES STATUS 23:14:50 policy-apex-pdp Up 35 seconds 23:14:50 grafana Up 36 seconds 23:14:50 policy-pap Up 37 seconds 23:14:50 policy-api Up 40 seconds 23:14:50 kafka Up 43 seconds 23:14:50 compose_zookeeper_1 Up 44 seconds 23:14:50 mariadb Up 42 seconds 23:14:50 simulator Up 39 seconds 23:14:50 prometheus Up 38 seconds 23:14:50 ++ export 'SUITES=pap-test.robot 23:14:50 
pap-slas.robot' 23:14:50 ++ SUITES='pap-test.robot 23:14:50 pap-slas.robot' 23:14:50 ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 23:14:50 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' 23:14:50 + load_set 23:14:50 + _setopts=hxB 23:14:50 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:14:50 ++ tr : ' ' 23:14:50 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:14:50 + set +o braceexpand 23:14:50 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:14:50 + set +o hashall 23:14:50 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:14:50 + set +o interactive-comments 23:14:50 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:14:50 + set +o xtrace 23:14:50 ++ echo hxB 23:14:50 ++ sed 's/./& /g' 23:14:50 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:14:50 + set +h 23:14:50 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:14:50 + set +x 23:14:50 + docker_stats 23:14:50 + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt 23:14:50 ++ uname -s 23:14:50 + '[' Linux == Darwin ']' 23:14:50 + sh -c 'top -bn1 | head -3' 23:14:50 top - 23:14:50 up 4 min, 0 users, load average: 4.11, 1.64, 0.64 23:14:50 Tasks: 211 total, 1 running, 131 sleeping, 0 stopped, 0 zombie 23:14:50 %Cpu(s): 14.1 us, 3.0 sy, 0.0 ni, 78.8 id, 4.0 wa, 0.0 hi, 0.1 si, 0.1 st 23:14:50 + echo 23:14:50 + sh -c 'free -h' 23:14:50 23:14:50 total used free shared buff/cache available 23:14:50 Mem: 31G 3.0G 22G 1.3M 6.4G 27G 23:14:50 Swap: 1.0G 0B 1.0G 23:14:50 + echo 23:14:50 + docker ps --format 'table {{ .Names }}\t{{ .Status }}' 23:14:50 23:14:51 NAMES STATUS 23:14:51 policy-apex-pdp Up 35 seconds 23:14:51 grafana Up 37 seconds 23:14:51 policy-pap Up 37 seconds 23:14:51 policy-api Up 40 seconds 23:14:51 kafka Up 43 seconds 23:14:51 compose_zookeeper_1 Up 44 seconds 23:14:51 mariadb Up 42 seconds 23:14:51 simulator Up 39 seconds 23:14:51 prometheus Up 38 seconds 23:14:51 + echo 23:14:51 23:14:51 + docker stats --no-stream 23:14:53 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 23:14:53 8967f421c693 policy-apex-pdp 1.45% 186MiB / 31.41GiB 0.58% 7.38kB / 6.97kB 0B / 0B 48 23:14:53 acf9b5bde4c5 grafana 0.05% 58.36MiB / 31.41GiB 0.18% 18.6kB / 3.38kB 0B / 24.9MB 20 23:14:53 2c296206e5b6 policy-pap 40.20% 501.2MiB / 31.41GiB 1.56% 31.5kB / 33.6kB 0B / 153MB 62 23:14:53 23f9fce87df4 policy-api 0.11% 737.6MiB / 31.41GiB 2.29% 1MB / 737kB 0B / 0B 55 23:14:53 93ddc2c219aa kafka 0.52% 399.8MiB / 31.41GiB 1.24% 70.8kB / 74.4kB 0B / 508kB 83 23:14:53 4433b7ea6f05 compose_zookeeper_1 0.10% 100.1MiB / 31.41GiB 0.31% 57.7kB / 50.1kB 4.1kB / 393kB 60 23:14:53 f11edfef694c mariadb 0.01% 102MiB / 31.41GiB 0.32% 997kB / 1.19MB 11MB / 68.1MB 43 23:14:53 ca27f3deb272 simulator 0.07% 122MiB / 31.41GiB 0.38% 1.23kB / 0B 225kB / 0B 76 23:14:53 6c6ebfd82e7c prometheus 0.06% 18.91MiB / 31.41GiB 0.06% 27.6kB / 1.09kB 0B / 0B 10 23:14:53 + echo 23:14:53 23:14:53 + cd /tmp/tmp.xIYONv2aFw 23:14:53 + echo 'Reading the testplan:' 23:14:53 Reading the testplan: 23:14:53 + echo 'pap-test.robot 23:14:53 pap-slas.robot' 23:14:53 + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' 23:14:53 + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|' 23:14:53 + cat testplan.txt 23:14:53 
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot
23:14:53 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
23:14:53 ++ xargs
23:14:53 + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot'
23:14:53 + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
23:14:53 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates'
23:14:53 ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002
23:14:53 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates
23:14:53 + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...'
23:14:53 Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...
23:14:53 + relax_set
23:14:53 + set +e
23:14:53 + set +o pipefail
23:14:53 + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot
23:14:53 ==============================================================================
23:14:53 pap
23:14:53 ==============================================================================
23:14:53 pap.Pap-Test
23:14:53 ==============================================================================
23:14:54 LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS |
23:14:54 ------------------------------------------------------------------------------
23:14:55 LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS |
23:14:55 ------------------------------------------------------------------------------
23:14:55 LoadNodeTemplates :: Create node templates in database using speci... | PASS |
23:14:55 ------------------------------------------------------------------------------
23:14:55 Healthcheck :: Verify policy pap health check | PASS |
23:14:55 ------------------------------------------------------------------------------
23:15:16 Consolidated Healthcheck :: Verify policy consolidated health check | PASS |
23:15:16 ------------------------------------------------------------------------------
23:15:16 Metrics :: Verify policy pap is exporting prometheus metrics | PASS |
23:15:16 ------------------------------------------------------------------------------
23:15:17 AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS |
23:15:17 ------------------------------------------------------------------------------
23:15:17 QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS |
23:15:17 ------------------------------------------------------------------------------
23:15:17 ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS |
23:15:17 ------------------------------------------------------------------------------
23:15:17 QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS |
23:15:17 ------------------------------------------------------------------------------
23:15:17 DeployPdpGroups :: Deploy policies in PdpGroups | PASS |
23:15:17 ------------------------------------------------------------------------------
23:15:18 QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS |
23:15:18 ------------------------------------------------------------------------------
23:15:18 QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS |
23:15:18 ------------------------------------------------------------------------------
23:15:18 QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS |
23:15:18 ------------------------------------------------------------------------------
23:15:18 UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS |
23:15:18 ------------------------------------------------------------------------------
23:15:19 UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS |
23:15:19 ------------------------------------------------------------------------------
23:15:19 QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS |
23:15:19 ------------------------------------------------------------------------------
23:15:39 QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS |
23:15:39 ------------------------------------------------------------------------------
23:15:39 QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS |
23:15:39 ------------------------------------------------------------------------------
23:15:39 DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS |
23:15:39 ------------------------------------------------------------------------------
23:15:39 DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... | PASS |
23:15:39 ------------------------------------------------------------------------------
23:15:40 QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS |
23:15:40 ------------------------------------------------------------------------------
23:15:40 pap.Pap-Test | PASS |
23:15:40 22 tests, 22 passed, 0 failed
23:15:40 ==============================================================================
23:15:40 pap.Pap-Slas
23:15:40 ==============================================================================
23:16:40 WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS |
23:16:40 ------------------------------------------------------------------------------
23:16:40 ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS |
23:16:40 ------------------------------------------------------------------------------
23:16:40 ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS |
23:16:40 ------------------------------------------------------------------------------
23:16:40 ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS |
23:16:40 ------------------------------------------------------------------------------
23:16:40 ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS |
23:16:40 ------------------------------------------------------------------------------
23:16:40 ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS |
23:16:40 ------------------------------------------------------------------------------
23:16:40 ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS |
23:16:40 ------------------------------------------------------------------------------
23:16:40 ValidateResponseTimeDeleteGroup :: Validate delete group response ...
| PASS | 23:16:40 ------------------------------------------------------------------------------ 23:16:40 pap.Pap-Slas | PASS | 23:16:40 8 tests, 8 passed, 0 failed 23:16:40 ============================================================================== 23:16:40 pap | PASS | 23:16:40 30 tests, 30 passed, 0 failed 23:16:40 ============================================================================== 23:16:40 Output: /tmp/tmp.xIYONv2aFw/output.xml 23:16:40 Log: /tmp/tmp.xIYONv2aFw/log.html 23:16:40 Report: /tmp/tmp.xIYONv2aFw/report.html 23:16:40 + RESULT=0 23:16:40 + load_set 23:16:40 + _setopts=hxB 23:16:40 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:16:40 ++ tr : ' ' 23:16:40 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:16:40 + set +o braceexpand 23:16:40 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:16:40 + set +o hashall 23:16:40 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:16:40 + set +o interactive-comments 23:16:40 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:16:40 + set +o xtrace 23:16:40 ++ echo hxB 23:16:40 ++ sed 's/./& /g' 23:16:40 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:16:40 + set +h 23:16:40 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:16:40 + set +x 23:16:40 + echo 'RESULT: 0' 23:16:40 RESULT: 0 23:16:40 + exit 0 23:16:40 + on_exit 23:16:40 + rc=0 23:16:40 + [[ -n /w/workspace/policy-pap-master-project-csit-pap ]] 23:16:40 + docker ps --format 'table {{ .Names }}\t{{ .Status }}' 23:16:40 NAMES STATUS 23:16:40 policy-apex-pdp Up 2 minutes 23:16:40 grafana Up 2 minutes 23:16:40 policy-pap Up 2 minutes 23:16:40 policy-api Up 2 minutes 23:16:40 kafka Up 2 minutes 23:16:40 compose_zookeeper_1 Up 2 minutes 23:16:40 mariadb Up 2 minutes 23:16:40 simulator Up 2 minutes 23:16:40 prometheus Up 2 minutes 23:16:40 + docker_stats 23:16:40 ++ uname -s 23:16:40 + '[' Linux == Darwin ']' 23:16:40 + sh -c 'top -bn1 | head -3' 23:16:40 top - 23:16:40 up 6 min, 0 users, load average: 0.91, 1.26, 0.61 23:16:40 Tasks: 199 total, 1 running, 129 sleeping, 0 stopped, 0 zombie 23:16:40 %Cpu(s): 11.3 us, 2.3 sy, 0.0 ni, 83.2 id, 3.2 wa, 0.0 hi, 0.1 si, 0.1 st 23:16:40 + echo 23:16:40 23:16:40 + sh -c 'free -h' 23:16:40 total used free shared buff/cache available 23:16:40 Mem: 31G 2.9G 22G 1.3M 6.4G 28G 23:16:40 Swap: 1.0G 0B 1.0G 23:16:40 + echo 23:16:40 23:16:40 + docker ps --format 'table {{ .Names }}\t{{ .Status }}' 23:16:40 NAMES STATUS 23:16:40 policy-apex-pdp Up 2 minutes 23:16:40 grafana Up 2 minutes 23:16:40 policy-pap Up 2 minutes 23:16:40 policy-api Up 2 minutes 23:16:40 kafka Up 2 minutes 23:16:40 compose_zookeeper_1 Up 2 minutes 23:16:40 mariadb Up 2 minutes 23:16:40 simulator Up 2 minutes 23:16:40 prometheus Up 2 minutes 23:16:40 + echo 23:16:40 23:16:40 + docker stats --no-stream 23:16:43 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 23:16:43 8967f421c693 policy-apex-pdp 0.31% 186.3MiB / 31.41GiB 0.58% 56.4kB / 90.8kB 0B / 0B 52 23:16:43 acf9b5bde4c5 grafana 0.05% 53.12MiB / 31.41GiB 0.17% 19.6kB / 4.45kB 0B / 24.9MB 20 23:16:43 2c296206e5b6 policy-pap 8.09% 484.2MiB / 31.41GiB 1.51% 2.33MB / 819kB 0B / 153MB 65 23:16:43 23f9fce87df4 policy-api 0.10% 734.6MiB / 31.41GiB 2.28% 2.49MB / 1.29MB 0B / 0B 59 23:16:43 93ddc2c219aa kafka 11.62% 393.5MiB / 31.41GiB 1.22% 241kB / 216kB 0B / 606kB 85 23:16:43 4433b7ea6f05 compose_zookeeper_1 0.09% 101.5MiB / 31.41GiB 0.32% 60.6kB / 51.7kB 4.1kB / 393kB 60 23:16:43 f11edfef694c mariadb 0.02% 103.2MiB / 31.41GiB 0.32% 1.95MB / 4.77MB 11MB / 68.5MB 28 23:16:43 ca27f3deb272 
simulator 0.14% 122.1MiB / 31.41GiB 0.38% 1.5kB / 0B 225kB / 0B 78 23:16:43 6c6ebfd82e7c prometheus 0.31% 25.11MiB / 31.41GiB 0.08% 219kB / 11.7kB 0B / 0B 10 23:16:43 + echo 23:16:43 23:16:43 + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh 23:16:43 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']' 23:16:43 + relax_set 23:16:43 + set +e 23:16:43 + set +o pipefail 23:16:43 + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh 23:16:43 ++ echo 'Shut down started!' 23:16:43 Shut down started! 23:16:43 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:16:43 ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose 23:16:43 ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose 23:16:43 ++ source export-ports.sh 23:16:43 ++ source get-versions.sh 23:16:45 ++ echo 'Collecting logs from docker compose containers...' 23:16:45 Collecting logs from docker compose containers... 23:16:45 ++ docker-compose logs 23:16:46 ++ cat docker_compose.log 23:16:46 Attaching to policy-apex-pdp, grafana, policy-pap, policy-api, policy-db-migrator, kafka, compose_zookeeper_1, mariadb, simulator, prometheus 23:16:46 grafana | logger=settings t=2024-03-11T23:14:14.015210211Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2024-03-11T23:14:14Z 23:16:46 grafana | logger=settings t=2024-03-11T23:14:14.015487038Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 23:16:46 grafana | logger=settings t=2024-03-11T23:14:14.015502528Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 23:16:46 grafana | logger=settings t=2024-03-11T23:14:14.015506568Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 23:16:46 grafana | logger=settings t=2024-03-11T23:14:14.015510118Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 23:16:46 grafana | logger=settings t=2024-03-11T23:14:14.015517279Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 23:16:46 grafana | logger=settings t=2024-03-11T23:14:14.015521759Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 23:16:46 grafana | logger=settings t=2024-03-11T23:14:14.015525349Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 23:16:46 grafana | logger=settings t=2024-03-11T23:14:14.015529689Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 23:16:46 grafana | logger=settings t=2024-03-11T23:14:14.015533769Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 23:16:46 grafana | logger=settings t=2024-03-11T23:14:14.015536509Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 23:16:46 grafana | logger=settings t=2024-03-11T23:14:14.015540959Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 23:16:46 grafana | logger=settings t=2024-03-11T23:14:14.015548279Z level=info msg=Target target=[all] 23:16:46 grafana | logger=settings t=2024-03-11T23:14:14.015554999Z level=info msg="Path Home" path=/usr/share/grafana 23:16:46 grafana | logger=settings t=2024-03-11T23:14:14.015733334Z level=info msg="Path 
Data" path=/var/lib/grafana 23:16:46 grafana | logger=settings t=2024-03-11T23:14:14.015740554Z level=info msg="Path Logs" path=/var/log/grafana 23:16:46 grafana | logger=settings t=2024-03-11T23:14:14.015743984Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 23:16:46 grafana | logger=settings t=2024-03-11T23:14:14.015747274Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 23:16:46 grafana | logger=settings t=2024-03-11T23:14:14.015753104Z level=info msg="App mode production" 23:16:46 grafana | logger=sqlstore t=2024-03-11T23:14:14.016072903Z level=info msg="Connecting to DB" dbtype=sqlite3 23:16:46 grafana | logger=sqlstore t=2024-03-11T23:14:14.016098333Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.016860552Z level=info msg="Starting DB migrations" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.017807547Z level=info msg="Executing migration" id="create migration_log table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.018671229Z level=info msg="Migration successfully executed" id="create migration_log table" duration=863.182µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.02304663Z level=info msg="Executing migration" id="create user table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.023722978Z level=info msg="Migration successfully executed" id="create user table" duration=674.318µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.027291929Z level=info msg="Executing migration" id="add unique index user.login" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.027983896Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=693.697µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.034245056Z level=info msg="Executing migration" id="add unique index user.email" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.035523339Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=1.278653ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.039317946Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.040075976Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=757.42µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.043513483Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.044253842Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=740.279µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.05047548Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.053896737Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=3.422377ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.057437938Z level=info msg="Executing migration" id="create user table v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.058339101Z level=info msg="Migration successfully executed" id="create user table v2" duration=900.853µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.061902752Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.062719694Z level=info 
msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=813.901µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.068295026Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.069107066Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=818.23µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.072614136Z level=info msg="Executing migration" id="copy data_source v1 to v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.073359864Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=749.079µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.077057019Z level=info msg="Executing migration" id="Drop old table user_v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.077968992Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=904.443µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.084063738Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.08529169Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.242032ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.089085966Z level=info msg="Executing migration" id="Update user table charset" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.089130967Z level=info msg="Migration successfully executed" id="Update user table charset" duration=46.451µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.092733269Z level=info msg="Executing migration" id="Add last_seen_at column to user" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.094454893Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.720184ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.098138107Z level=info msg="Executing migration" id="Add missing user data" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.098455995Z level=info msg="Migration successfully executed" id="Add missing user data" duration=319.858µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.104315604Z level=info msg="Executing migration" id="Add is_disabled column to user" 23:16:46 zookeeper_1 | ===> User 23:16:46 zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 23:16:46 zookeeper_1 | ===> Configuring ... 23:16:46 zookeeper_1 | ===> Running preflight checks ... 23:16:46 zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ... 23:16:46 zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ... 23:16:46 zookeeper_1 | ===> Launching ... 23:16:46 zookeeper_1 | ===> Launching zookeeper ... 
23:16:46 zookeeper_1 | [2024-03-11 23:14:10,304] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,311] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,312] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,312] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,312] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,313] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,313] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,313] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,313] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,315] INFO Log4j 1.2 jmx support not found; jmx disabled. (org.apache.zookeeper.jmx.ManagedUtil) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,315] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,315] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,316] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,316] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,316] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,316] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,328] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@7690781 (org.apache.zookeeper.server.ServerMetrics) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,330] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,330] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,333] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,343] INFO (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,343] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,343] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,343] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) 
23:16:46 zookeeper_1 | [2024-03-11 23:14:10,343] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,343] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,343] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,343] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,343] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,343] INFO (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,345] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,345] INFO Server environment:host.name=4433b7ea6f05 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,345] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,345] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,345] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.105769512Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.451698ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.109511497Z level=info msg="Executing migration" id="Add index user.login/user.email" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.110655947Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.14446ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.11430251Z level=info msg="Executing migration" id="Add is_service_account column to user" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.115458579Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.156159ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.153300326Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.161441093Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=8.140678ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.174015714Z level=info msg="Executing migration" id="Add uid column to user" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.175298387Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.282923ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.185956619Z level=info msg="Executing migration" id="Update uid column values for users" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.186429131Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=475.832µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.198239993Z level=info msg="Executing migration" id="Add unique index user_uid" 23:16:46 grafana | logger=migrator 
t=2024-03-11T23:14:14.199492084Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=1.252071ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.210766772Z level=info msg="Executing migration" id="create temp user table v1-7" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.212181018Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.413776ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.22752181Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.228922986Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=1.404086ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.236795717Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.237993837Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.19336ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.246962666Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.248169948Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=1.206832ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.293125265Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.294450708Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.321273ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.298760849Z level=info msg="Executing migration" id="Update temp_user table charset" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.29878788Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=28.191µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.302691359Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.30348852Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=803.161µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.310021386Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.311048062Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=1.030566ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.317977529Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.319219101Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.256832ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.323983333Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.32505371Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.071327ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.331473194Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 23:16:46 grafana 
| logger=migrator t=2024-03-11T23:14:14.334705987Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.232373ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.338666387Z level=info msg="Executing migration" id="create temp_user v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.339526169Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=859.472µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.343503921Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.344392703Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=889.042µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.348162259Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.348907359Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=745.36µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.353958708Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.354659716Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=700.888µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.358326679Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.359016737Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=689.718µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.361939281Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.362291581Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=352.22µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.368554941Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.369022032Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=466.842µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.372768598Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.373507137Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=738.56µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.377419477Z level=info msg="Executing migration" id="create star table" 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,345] INFO Server 
environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:
/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,345] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,346] INFO Server 
environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,346] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,346] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,346] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,346] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,346] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,346] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,346] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,346] INFO Server environment:os.memory.free=490MB (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,346] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,346] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,346] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,346] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,346] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,346] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,346] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,346] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,346] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,347] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,348] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,348] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,349] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,349] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,350] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,350] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,350] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,350] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,350] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.378515725Z level=info msg="Migration successfully executed" id="create star table" duration=1.096037ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.381882031Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.38263507Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=752.799µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.38771917Z level=info msg="Executing migration" id="create org table v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.388744036Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.024836ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.392656855Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.39319744Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=540.385µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.396838032Z level=info msg="Executing migration" id="create org_user table v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.397511969Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=673.697µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.403955114Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.404750855Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=795.711µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.465011283Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.467601529Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=2.590187ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.474329331Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.475510341Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.177109ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.482379556Z level=info msg="Executing migration" id="Update org table charset" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.48255875Z level=info msg="Migration successfully executed" id="Update org table charset" duration=176.964µs 23:16:46 grafana | logger=migrator 
t=2024-03-11T23:14:14.487943388Z level=info msg="Executing migration" id="Update org_user table charset" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.487972519Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=29.311µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.491469949Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.491826027Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=355.599µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.497614945Z level=info msg="Executing migration" id="create dashboard table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.499096882Z level=info msg="Migration successfully executed" id="create dashboard table" duration=1.480527ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.571040749Z level=info msg="Executing migration" id="add index dashboard.account_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.57379356Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=2.772711ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.581114107Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.582133533Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.026346ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.589878681Z level=info msg="Executing migration" id="create dashboard_tag table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.590840725Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=963.514µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.597465094Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.598308656Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=845.983µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.604096713Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.605044947Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=950.374µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.611267986Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.616142271Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=4.873595ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.620942834Z level=info msg="Executing migration" id="create dashboard v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.621924878Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=982.225µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.626422453Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.627263724Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=834.171µs 23:16:46 grafana | 
logger=migrator t=2024-03-11T23:14:14.633069643Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.633933625Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=863.473µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.64118355Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.641904888Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=721.968µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.648802125Z level=info msg="Executing migration" id="drop table dashboard_v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.64980818Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.007775ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.659259642Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.659382525Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=126.823µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.664825524Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.667413549Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=2.597625ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.673085044Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.675345142Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=2.259688ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.683225163Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.684785803Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.56309ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.689739349Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.690500469Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=761.23µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.696977864Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.698770911Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.792886ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.705756248Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.706569759Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=810.091µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.712340916Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.713092916Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=751.95µs 23:16:46 grafana | 
logger=migrator t=2024-03-11T23:14:14.72265368Z level=info msg="Executing migration" id="Update dashboard table charset" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.722703991Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=54.131µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.728907309Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.72896075Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=57.331µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.732466061Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.734510492Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.043812ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.738050053Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.739409777Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=1.350804ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.744260842Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.746188031Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=1.920699ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.751444844Z level=info msg="Executing migration" id="Add column uid in dashboard" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.753835835Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.394121ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.75871881Z level=info msg="Executing migration" id="Update uid column values in dashboard" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.758935496Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=216.886µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.765164675Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.765896894Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=732.25µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.769706361Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.770404379Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=704.408µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.773687032Z level=info msg="Executing migration" id="Update dashboard title length" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.773709523Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=23.631µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.780139258Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.780942028Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=802.74µs 23:16:46 grafana | logger=migrator 
t=2024-03-11T23:14:14.785177205Z level=info msg="Executing migration" id="create dashboard_provisioning" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.785856233Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=679.008µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.789178698Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.79434509Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.166152ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.801212215Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.80217741Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=974.665µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.807038154Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.808766938Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.728824ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.812536424Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.813751945Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.215191ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.817344907Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.817559072Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=214.425µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.824760717Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.82529399Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=533.283µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.829652221Z level=info msg="Executing migration" id="Add check_sum column" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.832124704Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.475373ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.835732297Z level=info msg="Executing migration" id="Add index for dashboard_title" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.836927757Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.1958ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.8401924Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.840394315Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=202.935µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.845724121Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.845924126Z level=info 
msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=199.675µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.848350749Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.84918223Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=835.581µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.852960547Z level=info msg="Executing migration" id="Add isPublic for dashboard" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.85506789Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.106733ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.860851488Z level=info msg="Executing migration" id="create data_source table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.862210812Z level=info msg="Migration successfully executed" id="create data_source table" duration=1.367294ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.865601019Z level=info msg="Executing migration" id="add index data_source.account_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.866482482Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=885.734µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.869836267Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.870674828Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=838.351µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.876151968Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.877361679Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.208371ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.882357307Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.883500396Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=1.14309ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.890381591Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.89543455Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=5.053229ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.898685223Z level=info msg="Executing migration" id="create data_source table v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.899549945Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=867.012µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.904385559Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.90521257Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=826.581µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.911159401Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 23:16:46 grafana | logger=migrator 
t=2024-03-11T23:14:14.912542097Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.381766ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.976683675Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.97766062Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=976.664µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.981345904Z level=info msg="Executing migration" id="Add column with_credentials" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.985051028Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=3.704064ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.990747884Z level=info msg="Executing migration" id="Add secure json data column" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.993051423Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.303019ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.996461379Z level=info msg="Executing migration" id="Update data_source table charset" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:14.99649734Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=32.901µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.002474693Z level=info msg="Executing migration" id="Update initial version to 1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.002705489Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=222.105µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.006061694Z level=info msg="Executing migration" id="Add read_only data column" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.009979005Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=3.917481ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.015913766Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.016208984Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=295.558µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.019256001Z level=info msg="Executing migration" id="Update json_data with nulls" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.019410655Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=154.754µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.022695559Z level=info msg="Executing migration" id="Add uid column" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.025011848Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.315849ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.030773225Z level=info msg="Executing migration" id="Update uid value" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.030982041Z level=info msg="Migration successfully executed" id="Update uid value" duration=208.906µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.034119121Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.035339062Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=1.219351ms 23:16:46 grafana | 
logger=migrator t=2024-03-11T23:14:15.038717788Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.039949619Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=1.230981ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.043210053Z level=info msg="Executing migration" id="create api_key table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.044020254Z level=info msg="Migration successfully executed" id="create api_key table" duration=809.611µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.049431312Z level=info msg="Executing migration" id="add index api_key.account_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.050197081Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=765.469µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.053439234Z level=info msg="Executing migration" id="add index api_key.key" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.054181534Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=741.97µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.057377345Z level=info msg="Executing migration" id="add index api_key.account_id_name" 23:16:46 kafka | ===> User 23:16:46 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 23:16:46 kafka | ===> Configuring ... 23:16:46 kafka | Running in Zookeeper mode... 23:16:46 kafka | ===> Running preflight checks ... 23:16:46 kafka | ===> Check if /var/lib/kafka/data is writable ... 23:16:46 kafka | ===> Check if Zookeeper is healthy ... 23:16:46 kafka | SLF4J: Class path contains multiple SLF4J bindings. 23:16:46 kafka | SLF4J: Found binding in [jar:file:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] 23:16:46 kafka | SLF4J: Found binding in [jar:file:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] 23:16:46 kafka | SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 23:16:46 kafka | SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory] 23:16:46 kafka | [2024-03-11 23:14:11,716] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:11,717] INFO Client environment:host.name=93ddc2c219aa (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:11,717] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:11,717] INFO Client environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:11,717] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 23:16:46 mariadb | 2024-03-11 23:14:08+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 23:16:46 mariadb | 2024-03-11 23:14:08+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 23:16:46 mariadb | 2024-03-11 23:14:08+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 
23:16:46 mariadb | 2024-03-11 23:14:08+00:00 [Note] [Entrypoint]: Initializing database files 23:16:46 mariadb | 2024-03-11 23:14:08 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:46 mariadb | 2024-03-11 23:14:08 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:46 mariadb | 2024-03-11 23:14:08 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:46 mariadb | 23:16:46 mariadb | 23:16:46 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! 23:16:46 mariadb | To do so, start the server, then issue the following command: 23:16:46 mariadb | 23:16:46 mariadb | '/usr/bin/mysql_secure_installation' 23:16:46 mariadb | 23:16:46 mariadb | which will also give you the option of removing the test 23:16:46 mariadb | databases and anonymous user created by default. This is 23:16:46 mariadb | strongly recommended for production servers. 23:16:46 mariadb | 23:16:46 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb 23:16:46 mariadb | 23:16:46 mariadb | Please report any problems at https://mariadb.org/jira 23:16:46 mariadb | 23:16:46 mariadb | The latest information about MariaDB is available at https://mariadb.org/. 23:16:46 mariadb | 23:16:46 mariadb | Consider joining MariaDB's strong and vibrant community: 23:16:46 mariadb | https://mariadb.org/get-involved/ 23:16:46 mariadb | 23:16:46 mariadb | 2024-03-11 23:14:10+00:00 [Note] [Entrypoint]: Database files initialized 23:16:46 mariadb | 2024-03-11 23:14:10+00:00 [Note] [Entrypoint]: Starting temporary server 23:16:46 mariadb | 2024-03-11 23:14:10+00:00 [Note] [Entrypoint]: Waiting for server startup 23:16:46 mariadb | 2024-03-11 23:14:10 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 98 ... 23:16:46 mariadb | 2024-03-11 23:14:10 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 23:16:46 mariadb | 2024-03-11 23:14:10 0 [Note] InnoDB: Number of transaction pools: 1 23:16:46 mariadb | 2024-03-11 23:14:10 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 23:16:46 mariadb | 2024-03-11 23:14:10 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 23:16:46 mariadb | 2024-03-11 23:14:10 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:46 mariadb | 2024-03-11 23:14:10 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:46 mariadb | 2024-03-11 23:14:10 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 23:16:46 mariadb | 2024-03-11 23:14:10 0 [Note] InnoDB: Completed initialization of buffer pool 23:16:46 mariadb | 2024-03-11 23:14:10 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 23:16:46 mariadb | 2024-03-11 23:14:10 0 [Note] InnoDB: 128 rollback segments are active. 23:16:46 mariadb | 2024-03-11 23:14:10 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 23:16:46 mariadb | 2024-03-11 23:14:10 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 
23:16:46 kafka | [2024-03-11 23:14:11,717] INFO Client environment:java.class.path=/usr/share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/share/java/kafka/jersey-common-2.39.1.jar:/usr/share/java/kafka/swagger-annotations-2.2.8.jar:/usr/share/java/kafka/jose4j-0.9.3.jar:/usr/share/java/kafka/commons-validator-1.7.jar:/usr/share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/share/java/kafka/rocksdbjni-7.9.2.jar:/usr/share/java/kafka/jackson-annotations-2.13.5.jar:/usr/share/java/kafka/commons-io-2.11.0.jar:/usr/share/java/kafka/javax.activation-api-1.2.0.jar:/usr/share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/share/java/kafka/commons-cli-1.4.jar:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/share/java/kafka/scala-reflect-2.13.11.jar:/usr/share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/share/java/kafka/jline-3.22.0.jar:/usr/share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/share/java/kafka/hk2-api-2.6.1.jar:/usr/share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/share/java/kafka/kafka.jar:/usr/share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/share/java/kafka/scala-library-2.13.11.jar:/usr/share/java/kafka/jakarta.inject-2.6.1.jar:/usr/share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/share/java/kafka/hk2-locator-2.6.1.jar:/usr/share/java/kafka/reflections-0.10.2.jar:/usr/share/java/kafka/slf4j-api-1.7.36.jar:/usr/share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/share/java/kafka/paranamer-2.8.jar:/usr/share/java/kafka/commons-beanutils-1.9.4.jar:/usr/share/java/kafka/jaxb-api-2.3.1.jar:/usr/share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/share/java/kafka/hk2-utils-2.6.1.jar:/usr/share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/share/java/kafka/reload4j-1.2.25.jar:/usr/share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/share/java/kafka/jackson-core-2.13.5.jar:/usr/share/java/kafka/jersey-hk2-2.39.1.jar:/usr/share/java/kafka/jackson-databind-2.13.5.jar:/usr/share/java/kafka/jersey-client-2.39.1.jar:/usr/share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/share/java/kafka/commons-digester-2.1.jar:/usr/share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/share/java/kafka/argparse4j-0.7.0.jar:/usr/share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/kafka/audience-annotations-0.12.0.jar:/usr/share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/share/java/kafka/kafka-storage-api
-7.6.0-ccs.jar:/usr/share/java/kafka/maven-artifact-3.8.8.jar:/usr/share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/share/java/kafka/jersey-server-2.39.1.jar:/usr/share/java/kafka/commons-lang3-3.8.1.jar:/usr/share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/share/java/kafka/jopt-simple-5.0.4.jar:/usr/share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/share/java/kafka/lz4-java-1.8.0.jar:/usr/share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/share/java/kafka/checker-qual-3.19.0.jar:/usr/share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/share/java/kafka/pcollections-4.0.1.jar:/usr/share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/share/java/kafka/commons-logging-1.2.jar:/usr/share/java/kafka/jsr305-3.0.2.jar:/usr/share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/kafka/metrics-core-2.2.0.jar:/usr/share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/share/java/kafka/commons-collections-3.2.2.jar:/usr/share/java/kafka/javassist-3.29.2-GA.jar:/usr/share/java/kafka/caffeine-2.9.3.jar:/usr/share/java/kafka/plexus-utils-3.3.1.jar:/usr/share/java/kafka/zookeeper-3.8.3.jar:/usr/share/java/kafka/activation-1.1.1.jar:/usr/share/java/kafka/netty-common-4.1.100.Final.jar:/usr/share/java/kafka/metrics-core-4.1.12.1.jar:/usr/share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/share/java/kafka/snappy-java-1.1.10.5.jar:/usr/share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/
share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/common-utils-7.6.0.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/utility-belt-7.6.0.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:11,717] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:11,717] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:11,717] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:11,717] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:11,717] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:11,717] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:11,717] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:11,717] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:11,717] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:11,717] INFO Client environment:os.memory.free=487MB (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:11,718] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 
23:14:11,718] INFO Client environment:os.memory.total=504MB (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:11,724] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@184cf7cf (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:11,731] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 23:16:46 kafka | [2024-03-11 23:14:11,737] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) 23:16:46 kafka | [2024-03-11 23:14:11,746] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 23:16:46 kafka | [2024-03-11 23:14:11,786] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) 23:16:46 kafka | [2024-03-11 23:14:11,787] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) 23:16:46 kafka | [2024-03-11 23:14:11,802] INFO Socket connection established, initiating session, client: /172.17.0.6:46048, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) 23:16:46 kafka | [2024-03-11 23:14:11,846] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x10000039d6b0000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) 23:16:46 kafka | [2024-03-11 23:14:11,967] INFO Session: 0x10000039d6b0000 closed (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:11,967] INFO EventThread shut down for session: 0x10000039d6b0000 (org.apache.zookeeper.ClientCnxn) 23:16:46 kafka | Using log4j config /etc/kafka/log4j.properties 23:16:46 kafka | ===> Launching ... 23:16:46 kafka | ===> Launching kafka ... 23:16:46 kafka | [2024-03-11 23:14:12,737] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 23:16:46 kafka | [2024-03-11 23:14:13,112] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 23:16:46 kafka | [2024-03-11 23:14:13,208] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) 23:16:46 kafka | [2024-03-11 23:14:13,210] INFO starting (kafka.server.KafkaServer) 23:16:46 kafka | [2024-03-11 23:14:13,210] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 23:16:46 kafka | [2024-03-11 23:14:13,236] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) 23:16:46 kafka | [2024-03-11 23:14:13,242] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:13,242] INFO Client environment:host.name=93ddc2c219aa (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:13,242] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:13,242] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:13,242] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:13,242] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bi
n/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:13,242] INFO Client 
environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:13,242] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:13,242] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:13,242] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:13,242] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:13,242] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:13,242] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:13,242] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:13,242] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:13,242] INFO Client environment:os.memory.free=1007MB (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:13,242] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:13,242] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:13,247] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@5b619d14 (org.apache.zookeeper.ZooKeeper) 23:16:46 kafka | [2024-03-11 23:14:13,253] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 23:16:46 kafka | [2024-03-11 23:14:13,262] INFO zookeeper.request.timeout value is 0. 
feature enabled=false (org.apache.zookeeper.ClientCnxn) 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.058203426Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=825.401µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.063198923Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.063988213Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=788.57µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.067233106Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.067977355Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=743.989µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.072693756Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.073811215Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.117369ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.07755976Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.084343543Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=6.783133ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.08893898Z level=info msg="Executing migration" id="create api_key table v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.089475434Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=536.034µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.09283835Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.093380694Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=541.935µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.096928885Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.098083804Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.153919ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.10260894Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.103346808Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=737.788µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.106772195Z level=info msg="Executing migration" id="copy api_key v1 to v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.107118924Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=346.279µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.110350327Z level=info msg="Executing migration" id="Drop old table api_key_v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.110890831Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=539.364µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.115153789Z level=info msg="Executing migration" id="Update api_key table 
charset" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.11517947Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=26.561µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.118514225Z level=info msg="Executing migration" id="Add expires to api_key table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.120956068Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=2.442503ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.124420377Z level=info msg="Executing migration" id="Add service account foreign key" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.126870918Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.449741ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.13122063Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.131381524Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=161.104µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.134846432Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.138027583Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=3.179221ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.141538374Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.145118325Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=3.580231ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.148534212Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.149300421Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=765.739µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.155647474Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.156200218Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=552.663µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.160097697Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.160938519Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=840.252µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.165863514Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.167042285Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=1.173411ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.170716758Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.171470657Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=753.539µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.174788813Z level=info msg="Executing 
migration" id="create index IDX_dashboard_snapshot_user_id - v5" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.175593493Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=804.37µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.180171979Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.180240402Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=69.093µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.183561316Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.183591577Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=29.961µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.186729017Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.189445697Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.71651ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.192904365Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.195532432Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=2.627417ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.201150905Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.201223517Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=72.692µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.204658574Z level=info msg="Executing migration" id="create quota table v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.205811964Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.15299ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.209498098Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.2107447Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=1.242822ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.215301597Z level=info msg="Executing migration" id="Update quota table charset" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.215335527Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=36.651µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.219841673Z level=info msg="Executing migration" id="create plugin_setting table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.221286289Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=1.444056ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.224984833Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.226363109Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - 
v1" duration=1.376666ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.230111414Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.232954827Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.843613ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.237385691Z level=info msg="Executing migration" id="Update plugin_setting table charset" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.237415891Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=30.931µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.240848999Z level=info msg="Executing migration" id="create session table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.24168498Z level=info msg="Migration successfully executed" id="create session table" duration=835.561µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.245232111Z level=info msg="Executing migration" id="Drop old table playlist table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.245359414Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=127.343µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.250228659Z level=info msg="Executing migration" id="Drop old table playlist_item table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.250356432Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=127.134µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.254237051Z level=info msg="Executing migration" id="create playlist table v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.255504223Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=1.264351ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.259376082Z level=info msg="Executing migration" id="create playlist item table v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.260126061Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=751.619µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.268378802Z level=info msg="Executing migration" id="Update playlist table charset" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.268424523Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=46.262µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.272124417Z level=info msg="Executing migration" id="Update playlist_item table charset" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.272166038Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=42.141µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.276067978Z level=info msg="Executing migration" id="Add playlist column created_at" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.280645795Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.576506ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.284055432Z level=info msg="Executing migration" id="Add playlist column updated_at" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.286994077Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.935895ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.291488861Z 
level=info msg="Executing migration" id="drop preferences table v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.291573903Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=84.612µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.294921939Z level=info msg="Executing migration" id="drop preferences table v3" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.295006511Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=84.262µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.298432639Z level=info msg="Executing migration" id="create preferences table v3" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.299210979Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=777.57µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.302944414Z level=info msg="Executing migration" id="Update preferences table charset" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.302988105Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=44.291µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.30786238Z level=info msg="Executing migration" id="Add column team_id in preferences" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.310893777Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.031427ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.314397906Z level=info msg="Executing migration" id="Update team_id column values in preferences" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.31456525Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=168.894µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.318095041Z level=info msg="Executing migration" id="Add column week_start in preferences" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.3211842Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.088859ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.325641153Z level=info msg="Executing migration" id="Add column preferences.json_data" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.328919718Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=3.278165ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.370290523Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 23:16:46 mariadb | 2024-03-11 23:14:10 0 [Note] InnoDB: log sequence number 46590; transaction id 14 23:16:46 mariadb | 2024-03-11 23:14:10 0 [Note] Plugin 'FEEDBACK' is disabled. 23:16:46 mariadb | 2024-03-11 23:14:10 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:46 mariadb | 2024-03-11 23:14:10 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. 23:16:46 mariadb | 2024-03-11 23:14:10 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. 23:16:46 mariadb | 2024-03-11 23:14:10 0 [Note] mariadbd: ready for connections. 23:16:46 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution 23:16:46 mariadb | 2024-03-11 23:14:11+00:00 [Note] [Entrypoint]: Temporary server started. 
23:16:46 mariadb | 2024-03-11 23:14:13+00:00 [Note] [Entrypoint]: Creating user policy_user 23:16:46 mariadb | 2024-03-11 23:14:13+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) 23:16:46 mariadb | 23:16:46 mariadb | 23:16:46 mariadb | 2024-03-11 23:14:13+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh 23:16:46 mariadb | 2024-03-11 23:14:13+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf 23:16:46 mariadb | #!/bin/bash -xv 23:16:46 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved 23:16:46 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 23:16:46 mariadb | # 23:16:46 mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); 23:16:46 mariadb | # you may not use this file except in compliance with the License. 23:16:46 mariadb | # You may obtain a copy of the License at 23:16:46 mariadb | # 23:16:46 mariadb | # http://www.apache.org/licenses/LICENSE-2.0 23:16:46 mariadb | # 23:16:46 mariadb | # Unless required by applicable law or agreed to in writing, software 23:16:46 mariadb | # distributed under the License is distributed on an "AS IS" BASIS, 23:16:46 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 23:16:46 mariadb | # See the License for the specific language governing permissions and 23:16:46 mariadb | # limitations under the License. 23:16:46 mariadb | 23:16:46 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:46 mariadb | do 23:16:46 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" 23:16:46 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" 23:16:46 mariadb | done 23:16:46 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:46 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' 23:16:46 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:46 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:46 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' 23:16:46 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:46 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:46 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' 23:16:46 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:46 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:46 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.370482658Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=188.625µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.376246785Z level=info msg="Executing migration" id="Add preferences index org_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.37718881Z level=info msg="Migration successfully executed" 
id="Add preferences index org_id" duration=942.475µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.384425464Z level=info msg="Executing migration" id="Add preferences index user_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.385682286Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=1.263012ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.390649594Z level=info msg="Executing migration" id="create alert table v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.39208371Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.434476ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.39799073Z level=info msg="Executing migration" id="add index alert org_id & id " 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.399815897Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=1.827677ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.408129159Z level=info msg="Executing migration" id="add index alert state" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.409979427Z level=info msg="Migration successfully executed" id="add index alert state" duration=1.857678ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.415622251Z level=info msg="Executing migration" id="add index alert dashboard_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.416711649Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=1.086688ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.421557813Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.422272241Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=713.658µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.425899963Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.427192976Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=1.291093ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.433925258Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.435281133Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.350355ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.441761438Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.452463842Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=10.697693ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.457049138Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.457748967Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=702.039µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.463405861Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.464251463Z level=info 
msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=845.251µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.468611963Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.468885531Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=273.688µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.472670107Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.47317104Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=500.613µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.47904109Z level=info msg="Executing migration" id="create alert_notification table v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.479801239Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=755.059µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.48374453Z level=info msg="Executing migration" id="Add column is_default" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.487169837Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.424677ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.492127714Z level=info msg="Executing migration" id="Add column frequency" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.49551833Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.389786ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.500419576Z level=info msg="Executing migration" id="Add column send_reminder" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.503820402Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.401646ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.507553248Z level=info msg="Executing migration" id="Add column disable_resolve_message" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.511086899Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.532621ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.514723851Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.515587103Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=862.942µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.519473442Z level=info msg="Executing migration" id="Update alert table charset" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.519506363Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=33.311µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.52449444Z level=info msg="Executing migration" id="Update alert_notification table charset" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.524538701Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=45.821µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.528575204Z level=info msg="Executing migration" id="create notification_journal table v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.529698903Z level=info msg="Migration successfully executed" 
id="create notification_journal table v1" duration=1.123209ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.533582543Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.534432614Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=849.561µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.541596237Z level=info msg="Executing migration" id="drop alert_notification_journal" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.542686595Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.087458ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.546785779Z level=info msg="Executing migration" id="create alert_notification_state table v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.548030981Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=1.244882ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.552207078Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.55306852Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=858.272µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.55851981Z level=info msg="Executing migration" id="Add for to alert table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.564320017Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=5.799648ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.569599592Z level=info msg="Executing migration" id="Add column uid in alert_notification" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.573268836Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.668364ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.576856967Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.577037472Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=180.235µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.581933387Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.583133987Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=1.1989ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.587243952Z level=info msg="Executing migration" id="Remove unique index org_id_name" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.588509534Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=1.265352ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.592350453Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.595935464Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=3.584471ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.602069031Z level=info msg="Executing 
migration" id="alter alert.settings to mediumtext" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.602150863Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=82.482µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.605855657Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.606663358Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=809.401µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.610745502Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.612636741Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.889769ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.618997713Z level=info msg="Executing migration" id="Drop old annotation table v4" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.61929434Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=296.097µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.623043297Z level=info msg="Executing migration" id="create annotation table v5" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.624811431Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=1.767814ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.6286641Z level=info msg="Executing migration" id="add index annotation 0 v3" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.629646875Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=985.495µs 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,350] INFO zookeeper.pathStats.enabled = false (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,353] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,353] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,353] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,353] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,353] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,379] INFO Logging initialized @515ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,481] WARN o.e.j.s.ServletContextHandler@415b0b49{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,481] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,504] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server) 23:16:46 
zookeeper_1 | [2024-03-11 23:14:10,556] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,556] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,557] INFO node0 Scavenging every 600000ms (org.eclipse.jetty.server.session) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,563] WARN ServletContext@o.e.j.s.ServletContextHandler@415b0b49{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,572] INFO Started o.e.j.s.ServletContextHandler@415b0b49{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,590] INFO Started ServerConnector@6b695b06{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,590] INFO Started @727ms (org.eclipse.jetty.server.Server) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,590] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,595] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,595] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,597] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. (org.apache.zookeeper.server.NIOServerCnxnFactory) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,598] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,619] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,620] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,621] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,621] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,629] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,629] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,633] INFO Snapshot loaded in 11 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,634] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,634] INFO Snapshot taken in 1 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,642] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,642] INFO 
PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,663] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) 23:16:46 zookeeper_1 | [2024-03-11 23:14:10,665] INFO ZooKeeper audit is disabled. (org.apache.zookeeper.audit.ZKAuditProvider) 23:16:46 zookeeper_1 | [2024-03-11 23:14:11,824] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) 23:16:46 kafka | [2024-03-11 23:14:13,266] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) 23:16:46 kafka | [2024-03-11 23:14:13,269] INFO Opening socket connection to server zookeeper/172.17.0.4:2181. (org.apache.zookeeper.ClientCnxn) 23:16:46 kafka | [2024-03-11 23:14:13,277] INFO Socket connection established, initiating session, client: /172.17.0.6:46050, server: zookeeper/172.17.0.4:2181 (org.apache.zookeeper.ClientCnxn) 23:16:46 kafka | [2024-03-11 23:14:13,287] INFO Session establishment complete on server zookeeper/172.17.0.4:2181, session id = 0x10000039d6b0001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 23:16:46 kafka | [2024-03-11 23:14:13,293] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient) 23:16:46 kafka | [2024-03-11 23:14:13,686] INFO Cluster ID = OdmwtGb8RBC2kzsuX5kwmQ (kafka.server.KafkaServer) 23:16:46 kafka | [2024-03-11 23:14:13,691] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 23:16:46 kafka | [2024-03-11 23:14:13,749] INFO KafkaConfig values: 23:16:46 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 23:16:46 kafka | alter.config.policy.class.name = null 23:16:46 kafka | alter.log.dirs.replication.quota.window.num = 11 23:16:46 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 23:16:46 kafka | authorizer.class.name = 23:16:46 kafka | auto.create.topics.enable = true 23:16:46 kafka | auto.include.jmx.reporter = true 23:16:46 kafka | auto.leader.rebalance.enable = true 23:16:46 kafka | background.threads = 10 23:16:46 kafka | broker.heartbeat.interval.ms = 2000 23:16:46 kafka | broker.id = 1 23:16:46 kafka | broker.id.generation.enable = true 23:16:46 kafka | broker.rack = null 23:16:46 kafka | broker.session.timeout.ms = 9000 23:16:46 kafka | client.quota.callback.class = null 23:16:46 kafka | compression.type = producer 23:16:46 kafka | connection.failed.authentication.delay.ms = 100 23:16:46 kafka | connections.max.idle.ms = 600000 23:16:46 kafka | connections.max.reauth.ms = 0 23:16:46 kafka | control.plane.listener.name = null 23:16:46 kafka | controlled.shutdown.enable = true 23:16:46 kafka | controlled.shutdown.max.retries = 3 23:16:46 kafka | controlled.shutdown.retry.backoff.ms = 5000 23:16:46 kafka | controller.listener.names = null 23:16:46 kafka | controller.quorum.append.linger.ms = 25 23:16:46 kafka | controller.quorum.election.backoff.max.ms = 1000 23:16:46 kafka | controller.quorum.election.timeout.ms = 1000 23:16:46 kafka | controller.quorum.fetch.timeout.ms = 2000 23:16:46 kafka | controller.quorum.request.timeout.ms = 2000 23:16:46 kafka | controller.quorum.retry.backoff.ms = 20 23:16:46 kafka | controller.quorum.voters = [] 23:16:46 kafka | controller.quota.window.num = 11 23:16:46 kafka | controller.quota.window.size.seconds = 1 23:16:46 kafka | controller.socket.timeout.ms = 30000 23:16:46 kafka 
| create.topic.policy.class.name = null 23:16:46 kafka | default.replication.factor = 1 23:16:46 kafka | delegation.token.expiry.check.interval.ms = 3600000 23:16:46 kafka | delegation.token.expiry.time.ms = 86400000 23:16:46 kafka | delegation.token.master.key = null 23:16:46 kafka | delegation.token.max.lifetime.ms = 604800000 23:16:46 kafka | delegation.token.secret.key = null 23:16:46 kafka | delete.records.purgatory.purge.interval.requests = 1 23:16:46 kafka | delete.topic.enable = true 23:16:46 kafka | early.start.listeners = null 23:16:46 kafka | fetch.max.bytes = 57671680 23:16:46 kafka | fetch.purgatory.purge.interval.requests = 1000 23:16:46 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] 23:16:46 kafka | group.consumer.heartbeat.interval.ms = 5000 23:16:46 kafka | group.consumer.max.heartbeat.interval.ms = 15000 23:16:46 kafka | group.consumer.max.session.timeout.ms = 60000 23:16:46 kafka | group.consumer.max.size = 2147483647 23:16:46 kafka | group.consumer.min.heartbeat.interval.ms = 5000 23:16:46 kafka | group.consumer.min.session.timeout.ms = 45000 23:16:46 kafka | group.consumer.session.timeout.ms = 45000 23:16:46 kafka | group.coordinator.new.enable = false 23:16:46 kafka | group.coordinator.threads = 1 23:16:46 kafka | group.initial.rebalance.delay.ms = 3000 23:16:46 kafka | group.max.session.timeout.ms = 1800000 23:16:46 kafka | group.max.size = 2147483647 23:16:46 kafka | group.min.session.timeout.ms = 6000 23:16:46 policy-api | Waiting for mariadb port 3306... 23:16:46 policy-api | mariadb (172.17.0.2:3306) open 23:16:46 policy-api | Waiting for policy-db-migrator port 6824... 23:16:46 policy-api | policy-db-migrator (172.17.0.7:6824) open 23:16:46 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 23:16:46 policy-api | 23:16:46 policy-api | . ____ _ __ _ _ 23:16:46 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 23:16:46 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 23:16:46 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 23:16:46 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / 23:16:46 policy-api | =========|_|==============|___/=/_/_/_/ 23:16:46 policy-api | :: Spring Boot :: (v3.1.8) 23:16:46 policy-api | 23:16:46 policy-api | [2024-03-11T23:14:23.817+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.10 with PID 21 (/app/api.jar started by policy in /opt/app/policy/api/bin) 23:16:46 policy-api | [2024-03-11T23:14:23.820+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" 23:16:46 policy-api | [2024-03-11T23:14:25.670+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 23:16:46 policy-api | [2024-03-11T23:14:25.772+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 91 ms. Found 6 JPA repository interfaces. 23:16:46 policy-api | [2024-03-11T23:14:26.246+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 23:16:46 policy-api | [2024-03-11T23:14:26.247+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 23:16:46 policy-api | [2024-03-11T23:14:26.970+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 23:16:46 policy-api | [2024-03-11T23:14:26.985+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 23:16:46 policy-api | [2024-03-11T23:14:26.987+00:00|INFO|StandardService|main] Starting service [Tomcat] 23:16:46 policy-api | [2024-03-11T23:14:26.987+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] 23:16:46 policy-api | [2024-03-11T23:14:27.103+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 23:16:46 policy-api | [2024-03-11T23:14:27.104+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3211 ms 23:16:46 policy-api | [2024-03-11T23:14:27.573+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 23:16:46 policy-api | [2024-03-11T23:14:27.660+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 23:16:46 policy-api | [2024-03-11T23:14:27.665+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer 23:16:46 policy-api | [2024-03-11T23:14:27.720+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 23:16:46 policy-api | [2024-03-11T23:14:28.080+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 23:16:46 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:46 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:46 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' 23:16:46 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:46 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:46 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' 23:16:46 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:46 mariadb | 23:16:46 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" 23:16:46 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' 23:16:46 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql 23:16:46 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp 23:16:46 mariadb | 23:16:46 mariadb | 2024-03-11 23:14:13+00:00 [Note] [Entrypoint]: Stopping temporary server 23:16:46 mariadb | 2024-03-11 23:14:13 0 [Note] mariadbd (initiated by: unknown): Normal shutdown 23:16:46 mariadb | 2024-03-11 23:14:13 0 [Note] InnoDB: FTS optimize thread exiting. 23:16:46 mariadb | 2024-03-11 23:14:13 0 [Note] InnoDB: Starting shutdown... 
23:16:46 mariadb | 2024-03-11 23:14:13 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool 23:16:46 mariadb | 2024-03-11 23:14:13 0 [Note] InnoDB: Buffer pool(s) dump completed at 240311 23:14:13 23:16:46 mariadb | 2024-03-11 23:14:14 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" 23:16:46 mariadb | 2024-03-11 23:14:14 0 [Note] InnoDB: Shutdown completed; log sequence number 339704; transaction id 298 23:16:46 mariadb | 2024-03-11 23:14:14 0 [Note] mariadbd: Shutdown complete 23:16:46 mariadb | 23:16:46 mariadb | 2024-03-11 23:14:14+00:00 [Note] [Entrypoint]: Temporary server stopped 23:16:46 mariadb | 23:16:46 mariadb | 2024-03-11 23:14:14+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. 23:16:46 mariadb | 23:16:46 mariadb | 2024-03-11 23:14:14 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... 23:16:46 mariadb | 2024-03-11 23:14:14 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 23:16:46 mariadb | 2024-03-11 23:14:14 0 [Note] InnoDB: Number of transaction pools: 1 23:16:46 mariadb | 2024-03-11 23:14:14 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 23:16:46 mariadb | 2024-03-11 23:14:14 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 23:16:46 mariadb | 2024-03-11 23:14:14 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:46 mariadb | 2024-03-11 23:14:14 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:46 mariadb | 2024-03-11 23:14:14 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 23:16:46 mariadb | 2024-03-11 23:14:14 0 [Note] InnoDB: Completed initialization of buffer pool 23:16:46 mariadb | 2024-03-11 23:14:14 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 23:16:46 mariadb | 2024-03-11 23:14:14 0 [Note] InnoDB: 128 rollback segments are active. 23:16:46 mariadb | 2024-03-11 23:14:14 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 23:16:46 mariadb | 2024-03-11 23:14:14 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 23:16:46 mariadb | 2024-03-11 23:14:14 0 [Note] InnoDB: log sequence number 339704; transaction id 299 23:16:46 mariadb | 2024-03-11 23:14:14 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool 23:16:46 mariadb | 2024-03-11 23:14:14 0 [Note] Plugin 'FEEDBACK' is disabled. 23:16:46 mariadb | 2024-03-11 23:14:14 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:46 mariadb | 2024-03-11 23:14:14 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. 23:16:46 mariadb | 2024-03-11 23:14:14 0 [Note] Server socket created on IP: '0.0.0.0'. 23:16:46 mariadb | 2024-03-11 23:14:14 0 [Note] Server socket created on IP: '::'. 23:16:46 mariadb | 2024-03-11 23:14:14 0 [Note] mariadbd: ready for connections. 
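The entrypoint trace above shows the policy databases being created and policy_user being granted access before the final mariadbd instance comes up and reports "ready for connections". A minimal JDBC sketch of the kind of connectivity check this enables is shown below; the host/port (mariadb:3306) and the policy_user/policy_user credentials are taken from the log, while the class name, the choice of the policyadmin database, and the SELECT 1 probe are illustrative assumptions only and are not part of the CSIT suite.

```java
// Minimal connectivity probe against the MariaDB container started above.
// Host/port and credentials come from the entrypoint output in this log;
// the database name (policyadmin, one of those created above) and the query
// are illustrative. Requires the org.mariadb.jdbc driver on the classpath.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DbCheck {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mariadb://mariadb:3306/policyadmin";
        try (Connection conn = DriverManager.getConnection(url, "policy_user", "policy_user");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            rs.next();
            System.out.println("mariadb reachable, SELECT 1 -> " + rs.getInt(1));
        }
    }
}
```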
23:16:46 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution 23:16:46 mariadb | 2024-03-11 23:14:14 0 [Note] InnoDB: Buffer pool(s) load completed at 240311 23:14:14 23:16:46 mariadb | 2024-03-11 23:14:15 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) 23:16:46 mariadb | 2024-03-11 23:14:15 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.7' (This connection closed normally without authentication) 23:16:46 mariadb | 2024-03-11 23:14:15 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication) 23:16:46 mariadb | 2024-03-11 23:14:15 6 [Warning] Aborted connection 6 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication) 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.63337427Z level=info msg="Executing migration" id="add index annotation 1 v3" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.634356435Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=982.285µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.640111192Z level=info msg="Executing migration" id="add index annotation 2 v3" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.641174589Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.063607ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.64513138Z level=info msg="Executing migration" id="add index annotation 3 v3" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.646823253Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=1.690883ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.653841813Z level=info msg="Executing migration" id="add index annotation 4 v3" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.65569316Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.851577ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.659916068Z level=info msg="Executing migration" id="Update annotation table charset" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.660042511Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=126.093µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.663780106Z level=info msg="Executing migration" id="Add column region_id to annotation table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.670539689Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=6.758503ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.677088326Z level=info msg="Executing migration" id="Drop category_id index" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.678074081Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=987.895µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.682273819Z level=info msg="Executing migration" id="Add column tags to annotation table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.689053521Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=6.779182ms 23:16:46 grafana | logger=migrator 
t=2024-03-11T23:14:15.693084195Z level=info msg="Executing migration" id="Create annotation_tag table v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.693911396Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=826.591µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.700092073Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.701797327Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.705964ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.705794219Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.707302088Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=1.508099ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.711220468Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.723103481Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=11.877983ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.760015333Z level=info msg="Executing migration" id="Create annotation_tag table v3" 23:16:46 policy-apex-pdp | Waiting for mariadb port 3306... 23:16:46 policy-apex-pdp | mariadb (172.17.0.2:3306) open 23:16:46 policy-apex-pdp | Waiting for kafka port 9092... 23:16:46 policy-apex-pdp | kafka (172.17.0.6:9092) open 23:16:46 policy-apex-pdp | Waiting for pap port 6969... 
23:16:46 policy-apex-pdp | pap (172.17.0.9:6969) open 23:16:46 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.321+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.565+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:46 policy-apex-pdp | allow.auto.create.topics = true 23:16:46 policy-apex-pdp | auto.commit.interval.ms = 5000 23:16:46 policy-apex-pdp | auto.include.jmx.reporter = true 23:16:46 policy-apex-pdp | auto.offset.reset = latest 23:16:46 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:16:46 policy-apex-pdp | check.crcs = true 23:16:46 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:16:46 policy-apex-pdp | client.id = consumer-ba46bd84-7ae1-41fa-a3bb-e4918f472988-1 23:16:46 policy-apex-pdp | client.rack = 23:16:46 policy-apex-pdp | connections.max.idle.ms = 540000 23:16:46 policy-apex-pdp | default.api.timeout.ms = 60000 23:16:46 policy-apex-pdp | enable.auto.commit = true 23:16:46 policy-apex-pdp | exclude.internal.topics = true 23:16:46 policy-apex-pdp | fetch.max.bytes = 52428800 23:16:46 policy-apex-pdp | fetch.max.wait.ms = 500 23:16:46 policy-apex-pdp | fetch.min.bytes = 1 23:16:46 policy-apex-pdp | group.id = ba46bd84-7ae1-41fa-a3bb-e4918f472988 23:16:46 policy-apex-pdp | group.instance.id = null 23:16:46 policy-apex-pdp | heartbeat.interval.ms = 3000 23:16:46 policy-apex-pdp | interceptor.classes = [] 23:16:46 policy-apex-pdp | internal.leave.group.on.close = true 23:16:46 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:46 policy-apex-pdp | isolation.level = read_uncommitted 23:16:46 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:46 policy-apex-pdp | max.partition.fetch.bytes = 1048576 23:16:46 policy-apex-pdp | max.poll.interval.ms = 300000 23:16:46 policy-apex-pdp | max.poll.records = 500 23:16:46 policy-apex-pdp | metadata.max.age.ms = 300000 23:16:46 policy-api | [2024-03-11T23:14:28.100+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 23:16:46 policy-api | [2024-03-11T23:14:28.199+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@3f4f5330 23:16:46 policy-api | [2024-03-11T23:14:28.202+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 
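The policy-api lines above ("HikariPool-1 - Starting...", "Added connection org.mariadb.jdbc.Connection@...", "Start completed.") are HikariCP bringing up its connection pool against MariaDB. A hedged sketch of an equivalent programmatic pool configuration follows; the JDBC URL, credentials, and pool size here are placeholders, since policy-api actually reads its datasource settings from /opt/app/policy/api/etc/apiParameters.yaml, which is not reproduced in this log.

```java
// Sketch of a HikariCP pool comparable to the "HikariPool-1" started by policy-api above.
// URL, credentials and pool size are placeholders, not the values policy-api reads from
// apiParameters.yaml. Requires HikariCP and the MariaDB JDBC driver on the classpath.
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import java.sql.Connection;

public class PoolExample {
    public static void main(String[] args) throws Exception {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mariadb://mariadb:3306/policyadmin"); // placeholder URL
        config.setUsername("policy_user");                            // placeholder credentials
        config.setPassword("policy_user");
        config.setMaximumPoolSize(10);

        try (HikariDataSource ds = new HikariDataSource(config);      // logs "HikariPool-1 - Starting..."
             Connection conn = ds.getConnection()) {                  // logs "HikariPool-1 - Added connection ..."
            System.out.println("pool started, connection valid: " + conn.isValid(2));
        }
    }
}
```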
23:16:46 policy-api | [2024-03-11T23:14:30.197+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 23:16:46 policy-api | [2024-03-11T23:14:30.201+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 23:16:46 policy-api | [2024-03-11T23:14:31.291+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 23:16:46 policy-api | [2024-03-11T23:14:32.116+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 23:16:46 policy-api | [2024-03-11T23:14:33.429+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. Explicitly configure spring.jpa.open-in-view to disable this warning 23:16:46 policy-api | [2024-03-11T23:14:33.687+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@d181ca3, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@78b9d614, org.springframework.security.web.context.SecurityContextHolderFilter@fb74661, org.springframework.security.web.header.HeaderWriterFilter@22ccd80f, org.springframework.security.web.authentication.logout.LogoutFilter@17d90f81, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@19e86461, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@6e04275e, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@453ef145, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@63f95ac1, org.springframework.security.web.access.ExceptionTranslationFilter@680f7a5e, org.springframework.security.web.access.intercept.AuthorizationFilter@631f188a] 23:16:46 policy-api | [2024-03-11T23:14:34.503+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 23:16:46 policy-api | [2024-03-11T23:14:34.601+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 23:16:46 policy-api | [2024-03-11T23:14:34.633+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' 23:16:46 policy-api | [2024-03-11T23:14:34.652+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 11.584 seconds (process running for 12.234) 23:16:46 policy-api | [2024-03-11T23:14:39.927+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 23:16:46 policy-api | [2024-03-11T23:14:39.927+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 23:16:46 policy-api | [2024-03-11T23:14:39.930+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 3 ms 23:16:46 policy-api | [2024-03-11T23:14:54.064+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-3] ***** OrderedServiceImpl implementers: 23:16:46 policy-api | [] 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.761549513Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=1.532589ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.766977711Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index 
annotation_tag.annotation_id_tag_id V3" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.768635674Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.657623ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.772287327Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.772671956Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=383.119µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.778512565Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.779209523Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=693.618µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.783052672Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.783673967Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=622.965µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.787532796Z level=info msg="Executing migration" id="Add created time to annotation table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.794292739Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=6.758923ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.800523737Z level=info msg="Executing migration" id="Add updated time to annotation table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.803546185Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=3.021788ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.809682231Z level=info msg="Executing migration" id="Add index for created in annotation table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.810690288Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.007826ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.81430967Z level=info msg="Executing migration" id="Add index for updated in annotation table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.815947501Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.637292ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.821314998Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.821826432Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=510.693µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.827443105Z level=info msg="Executing migration" id="Add epoch_end column" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.831794736Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.350911ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.835310345Z level=info msg="Executing migration" id="Add index for epoch_end" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.836384423Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=1.071568ms 23:16:46 
grafana | logger=migrator t=2024-03-11T23:14:15.844140511Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.844768467Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=634.206µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.854610658Z level=info msg="Executing migration" id="Move region to single row" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.855538873Z level=info msg="Migration successfully executed" id="Move region to single row" duration=927.954µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.859098503Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.860299823Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.20062ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.86447636Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.866330267Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.852347ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.872028523Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.873197503Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.16774ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.876240371Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.877259196Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.018655ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.88013593Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.881137206Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.001036ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.891796368Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.892853385Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.055307ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.897863953Z level=info msg="Executing migration" id="Increase tags column to length 4096" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.898075758Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=210.155µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.901797373Z level=info msg="Executing migration" id="create test_data table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.903328973Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.530829ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.912431294Z level=info msg="Executing migration" id="create 
dashboard_version table v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.913781138Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.348684ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.91734104Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.91891325Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.57515ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.922659846Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.923657831Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=993.915µs 23:16:46 policy-apex-pdp | metric.reporters = [] 23:16:46 policy-apex-pdp | metrics.num.samples = 2 23:16:46 policy-apex-pdp | metrics.recording.level = INFO 23:16:46 policy-apex-pdp | metrics.sample.window.ms = 30000 23:16:46 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:46 policy-apex-pdp | receive.buffer.bytes = 65536 23:16:46 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:16:46 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:46 policy-apex-pdp | request.timeout.ms = 30000 23:16:46 policy-apex-pdp | retry.backoff.ms = 100 23:16:46 policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:46 policy-apex-pdp | sasl.jaas.config = null 23:16:46 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:46 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:46 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:46 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:46 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:46 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:46 policy-apex-pdp | sasl.login.class = null 23:16:46 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:46 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:46 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:46 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:46 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:46 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:46 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:46 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:46 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:46 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:46 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:46 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:46 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:46 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:46 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:46 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:46 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:46 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:46 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:16:46 policy-apex-pdp | security.protocol = PLAINTEXT 
23:16:46 policy-apex-pdp | security.providers = null 23:16:46 policy-apex-pdp | send.buffer.bytes = 131072 23:16:46 policy-apex-pdp | session.timeout.ms = 45000 23:16:46 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:46 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:46 policy-apex-pdp | ssl.cipher.suites = null 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.931706867Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.932172418Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=466.961µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.938962472Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.939642Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=676.549µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.94359427Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.943825046Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=187.525µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.947628333Z level=info msg="Executing migration" id="create team table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.948567247Z level=info msg="Migration successfully executed" id="create team table" duration=939.414µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.955072183Z level=info msg="Executing migration" id="add index team.org_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.956977382Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.903809ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.960955933Z level=info msg="Executing migration" id="add unique index team_org_id_name" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.962580465Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=1.626572ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.966368002Z level=info msg="Executing migration" id="Add column uid in team" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.970923048Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.554537ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.976293135Z level=info msg="Executing migration" id="Update uid column values in team" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.976578322Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=283.817µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.981335603Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.982997706Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.665553ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.986564917Z level=info msg="Executing migration" id="create team member table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.988065006Z level=info msg="Migration successfully executed" id="create team member table" duration=1.498689ms 23:16:46 grafana | 
logger=migrator t=2024-03-11T23:14:15.995509786Z level=info msg="Executing migration" id="add index team_member.org_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:15.996566673Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.057187ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.00001556Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.001089658Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.073208ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.005179812Z level=info msg="Executing migration" id="add index team_member.team_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.006768613Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.588471ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.016249315Z level=info msg="Executing migration" id="Add column email to team table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.02387443Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=7.622525ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.028001755Z level=info msg="Executing migration" id="Add column external to team_member table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.03135086Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=3.347945ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.03522744Z level=info msg="Executing migration" id="Add column permission to team_member table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.039815116Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.586886ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.0450408Z level=info msg="Executing migration" id="create dashboard acl table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.046061476Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.019546ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.053587718Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.05518876Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.600261ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.062019053Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.063923972Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.899939ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.071804534Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.072902312Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.097397ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.076259487Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.077311844Z level=info msg="Migration successfully executed" id="add index 
dashboard_acl_user_id" duration=1.051677ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.080934687Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 23:16:46 kafka | initial.broker.registration.timeout.ms = 60000 23:16:46 kafka | inter.broker.listener.name = PLAINTEXT 23:16:46 kafka | inter.broker.protocol.version = 3.6-IV2 23:16:46 kafka | kafka.metrics.polling.interval.secs = 10 23:16:46 kafka | kafka.metrics.reporters = [] 23:16:46 kafka | leader.imbalance.check.interval.seconds = 300 23:16:46 kafka | leader.imbalance.per.broker.percentage = 10 23:16:46 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 23:16:46 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 23:16:46 kafka | log.cleaner.backoff.ms = 15000 23:16:46 kafka | log.cleaner.dedupe.buffer.size = 134217728 23:16:46 kafka | log.cleaner.delete.retention.ms = 86400000 23:16:46 kafka | log.cleaner.enable = true 23:16:46 kafka | log.cleaner.io.buffer.load.factor = 0.9 23:16:46 kafka | log.cleaner.io.buffer.size = 524288 23:16:46 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 23:16:46 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 23:16:46 kafka | log.cleaner.min.cleanable.ratio = 0.5 23:16:46 kafka | log.cleaner.min.compaction.lag.ms = 0 23:16:46 kafka | log.cleaner.threads = 1 23:16:46 kafka | log.cleanup.policy = [delete] 23:16:46 kafka | log.dir = /tmp/kafka-logs 23:16:46 kafka | log.dirs = /var/lib/kafka/data 23:16:46 kafka | log.flush.interval.messages = 9223372036854775807 23:16:46 kafka | log.flush.interval.ms = null 23:16:46 kafka | log.flush.offset.checkpoint.interval.ms = 60000 23:16:46 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 23:16:46 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 23:16:46 kafka | log.index.interval.bytes = 4096 23:16:46 kafka | log.index.size.max.bytes = 10485760 23:16:46 kafka | log.local.retention.bytes = -2 23:16:46 kafka | log.local.retention.ms = -2 23:16:46 kafka | log.message.downconversion.enable = true 23:16:46 kafka | log.message.format.version = 3.0-IV1 23:16:46 kafka | log.message.timestamp.after.max.ms = 9223372036854775807 23:16:46 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 23:16:46 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 23:16:46 kafka | log.message.timestamp.type = CreateTime 23:16:46 kafka | log.preallocate = false 23:16:46 kafka | log.retention.bytes = -1 23:16:46 kafka | log.retention.check.interval.ms = 300000 23:16:46 kafka | log.retention.hours = 168 23:16:46 kafka | log.retention.minutes = null 23:16:46 kafka | log.retention.ms = null 23:16:46 kafka | log.roll.hours = 168 23:16:46 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:46 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:46 policy-apex-pdp | ssl.engine.factory.class = null 23:16:46 policy-apex-pdp | ssl.key.password = null 23:16:46 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:16:46 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:16:46 policy-apex-pdp | ssl.keystore.key = null 23:16:46 policy-apex-pdp | ssl.keystore.location = null 23:16:46 policy-apex-pdp | ssl.keystore.password = null 23:16:46 policy-apex-pdp | ssl.keystore.type = JKS 23:16:46 policy-apex-pdp | ssl.protocol = TLSv1.3 23:16:46 policy-apex-pdp | ssl.provider = null 23:16:46 policy-apex-pdp | ssl.secure.random.implementation = null 23:16:46 policy-apex-pdp | 
ssl.trustmanager.algorithm = PKIX 23:16:46 policy-apex-pdp | ssl.truststore.certificates = null 23:16:46 policy-apex-pdp | ssl.truststore.location = null 23:16:46 policy-apex-pdp | ssl.truststore.password = null 23:16:46 policy-apex-pdp | ssl.truststore.type = JKS 23:16:46 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:46 policy-apex-pdp | 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.726+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.727+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.727+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710198889725 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.729+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-ba46bd84-7ae1-41fa-a3bb-e4918f472988-1, groupId=ba46bd84-7ae1-41fa-a3bb-e4918f472988] Subscribed to topic(s): policy-pdp-pap 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.742+00:00|INFO|ServiceManager|main] service manager starting 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.742+00:00|INFO|ServiceManager|main] service manager starting topics 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.745+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=ba46bd84-7ae1-41fa-a3bb-e4918f472988, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.765+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:46 policy-apex-pdp | allow.auto.create.topics = true 23:16:46 policy-apex-pdp | auto.commit.interval.ms = 5000 23:16:46 policy-apex-pdp | auto.include.jmx.reporter = true 23:16:46 policy-apex-pdp | auto.offset.reset = latest 23:16:46 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:16:46 policy-apex-pdp | check.crcs = true 23:16:46 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:16:46 policy-apex-pdp | client.id = consumer-ba46bd84-7ae1-41fa-a3bb-e4918f472988-2 23:16:46 policy-apex-pdp | client.rack = 23:16:46 policy-db-migrator | Waiting for mariadb port 3306... 23:16:46 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 23:16:46 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 23:16:46 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 23:16:46 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 23:16:46 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 23:16:46 policy-db-migrator | nc: connect to mariadb (172.17.0.2) port 3306 (tcp) failed: Connection refused 23:16:46 policy-db-migrator | Connection to mariadb (172.17.0.2) 3306 port [tcp/mysql] succeeded! 
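The ConsumerConfig dump and the "Subscribed to topic(s): policy-pdp-pap" line above describe the Kafka consumer that policy-apex-pdp brings up for the policy-pdp-pap topic. The sketch below assembles an equivalent plain Kafka client from the values visible in the log (bootstrap.servers=kafka:9092, the group.id, String deserializers, auto.offset.reset=latest); the class name and the single poll loop are illustrative assumptions, not the actual apex-pdp wrapper (SingleThreadedKafkaTopicSource) shown in the log.

```java
// Sketch of a consumer equivalent to the one policy-apex-pdp configures above.
// bootstrap.servers, group.id, deserializers and auto.offset.reset are the values
// shown in the ConsumerConfig dump; the topic is policy-pdp-pap, as logged.
// The poll loop is purely illustrative and not part of the CSIT run.
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PdpPapListener {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "ba46bd84-7ae1-41fa-a3bb-e4918f472988");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}
```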
23:16:46 policy-db-migrator | 321 blocks 23:16:46 policy-db-migrator | Preparing upgrade release version: 0800 23:16:46 policy-db-migrator | Preparing upgrade release version: 0900 23:16:46 policy-db-migrator | Preparing upgrade release version: 1000 23:16:46 policy-db-migrator | Preparing upgrade release version: 1100 23:16:46 policy-db-migrator | Preparing upgrade release version: 1200 23:16:46 policy-db-migrator | Preparing upgrade release version: 1300 23:16:46 policy-db-migrator | Done 23:16:46 policy-db-migrator | name version 23:16:46 policy-db-migrator | policyadmin 0 23:16:46 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 23:16:46 policy-db-migrator | upgrade: 0 -> 1300 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-apex-pdp | connections.max.idle.ms = 540000 23:16:46 policy-apex-pdp | default.api.timeout.ms = 60000 23:16:46 policy-apex-pdp | enable.auto.commit = true 23:16:46 policy-apex-pdp | exclude.internal.topics = true 23:16:46 policy-apex-pdp | fetch.max.bytes = 52428800 23:16:46 policy-apex-pdp | fetch.max.wait.ms = 500 23:16:46 policy-apex-pdp | fetch.min.bytes = 1 23:16:46 policy-apex-pdp | group.id = ba46bd84-7ae1-41fa-a3bb-e4918f472988 23:16:46 policy-apex-pdp | group.instance.id = null 23:16:46 policy-apex-pdp | heartbeat.interval.ms = 3000 23:16:46 policy-apex-pdp | interceptor.classes = [] 23:16:46 policy-apex-pdp | internal.leave.group.on.close = true 23:16:46 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:46 policy-apex-pdp | isolation.level = read_uncommitted 23:16:46 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:46 policy-apex-pdp | max.partition.fetch.bytes = 1048576 23:16:46 policy-apex-pdp | max.poll.interval.ms = 300000 23:16:46 policy-apex-pdp | max.poll.records = 500 23:16:46 policy-apex-pdp | metadata.max.age.ms = 300000 23:16:46 policy-apex-pdp | metric.reporters = [] 23:16:46 policy-apex-pdp | metrics.num.samples = 2 23:16:46 policy-apex-pdp | metrics.recording.level = INFO 23:16:46 policy-apex-pdp | metrics.sample.window.ms = 30000 23:16:46 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:46 policy-apex-pdp | receive.buffer.bytes = 65536 23:16:46 policy-apex-pdp | reconnect.backoff.max.ms = 
1000 23:16:46 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:46 policy-apex-pdp | request.timeout.ms = 30000 23:16:46 policy-apex-pdp | retry.backoff.ms = 100 23:16:46 policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:46 policy-apex-pdp | sasl.jaas.config = null 23:16:46 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:46 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:46 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:46 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:46 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:46 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:46 policy-apex-pdp | sasl.login.class = null 23:16:46 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:46 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:46 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:46 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:46 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:46 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:46 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:46 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:46 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:46 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:46 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:46 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:46 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:46 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:46 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:46 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:46 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:46 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:46 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:16:46 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:46 policy-apex-pdp | security.providers = null 23:16:46 policy-apex-pdp | send.buffer.bytes = 131072 23:16:46 policy-apex-pdp | session.timeout.ms = 45000 23:16:46 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:46 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:46 policy-apex-pdp | ssl.cipher.suites = null 23:16:46 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:46 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:46 policy-apex-pdp | ssl.engine.factory.class = null 23:16:46 policy-apex-pdp | ssl.key.password = null 23:16:46 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:16:46 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:16:46 policy-apex-pdp | ssl.keystore.key = null 23:16:46 policy-apex-pdp | ssl.keystore.location = null 23:16:46 policy-apex-pdp | ssl.keystore.password = null 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.081991333Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.055807ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.088936601Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.090549992Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.611171ms 23:16:46 grafana | logger=migrator 
t=2024-03-11T23:14:16.096060802Z level=info msg="Executing migration" id="add index dashboard_permission" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.097388087Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.327345ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.137652745Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.138644581Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=991.896µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.14569442Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.146227013Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=531.653µs 23:16:46 policy-apex-pdp | ssl.keystore.type = JKS 23:16:46 policy-apex-pdp | ssl.protocol = TLSv1.3 23:16:46 policy-apex-pdp | ssl.provider = null 23:16:46 policy-apex-pdp | ssl.secure.random.implementation = null 23:16:46 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:16:46 policy-apex-pdp | ssl.truststore.certificates = null 23:16:46 policy-apex-pdp | ssl.truststore.location = null 23:16:46 policy-apex-pdp | ssl.truststore.password = null 23:16:46 policy-apex-pdp | ssl.truststore.type = JKS 23:16:46 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:46 policy-apex-pdp | 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.773+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.773+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.773+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710198889773 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.774+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-ba46bd84-7ae1-41fa-a3bb-e4918f472988-2, groupId=ba46bd84-7ae1-41fa-a3bb-e4918f472988] Subscribed to topic(s): policy-pdp-pap 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.775+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=209ec75e-50c9-4128-a93d-88ee98cd58de, alive=false, publisher=null]]: starting 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.785+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:16:46 policy-apex-pdp | acks = -1 23:16:46 policy-apex-pdp | auto.include.jmx.reporter = true 23:16:46 policy-apex-pdp | batch.size = 16384 23:16:46 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:16:46 policy-apex-pdp | buffer.memory = 33554432 23:16:46 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:16:46 policy-apex-pdp | client.id = producer-1 23:16:46 policy-apex-pdp | compression.type = none 23:16:46 policy-apex-pdp | connections.max.idle.ms = 540000 23:16:46 policy-apex-pdp | delivery.timeout.ms = 120000 23:16:46 policy-apex-pdp | enable.idempotence = true 23:16:46 policy-apex-pdp | interceptor.classes = [] 23:16:46 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:46 policy-apex-pdp | linger.ms = 0 23:16:46 policy-apex-pdp | max.block.ms = 60000 23:16:46 policy-apex-pdp | max.in.flight.requests.per.connection = 5 23:16:46 policy-apex-pdp | max.request.size = 1048576 23:16:46 policy-apex-pdp | 
metadata.max.age.ms = 300000 23:16:46 policy-apex-pdp | metadata.max.idle.ms = 300000 23:16:46 policy-apex-pdp | metric.reporters = [] 23:16:46 policy-apex-pdp | metrics.num.samples = 2 23:16:46 policy-apex-pdp | metrics.recording.level = INFO 23:16:46 policy-apex-pdp | metrics.sample.window.ms = 30000 23:16:46 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true 23:16:46 policy-apex-pdp | partitioner.availability.timeout.ms = 0 23:16:46 policy-apex-pdp | partitioner.class = null 23:16:46 policy-apex-pdp | partitioner.ignore.keys = false 23:16:46 policy-apex-pdp | receive.buffer.bytes = 32768 23:16:46 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:16:46 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:46 policy-apex-pdp | request.timeout.ms = 30000 23:16:46 policy-apex-pdp | retries = 2147483647 23:16:46 policy-apex-pdp | retry.backoff.ms = 100 23:16:46 policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:46 policy-apex-pdp | sasl.jaas.config = null 23:16:46 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:46 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:46 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:46 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:46 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:46 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:46 policy-apex-pdp | sasl.login.class = null 23:16:46 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:46 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:46 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:46 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:46 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:46 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:46 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:46 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:46 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:46 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:46 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version 
VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:46 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:46 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:46 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:46 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:46 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:46 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:46 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:46 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:16:46 kafka | log.roll.jitter.hours = 0 23:16:46 kafka | log.roll.jitter.ms = null 23:16:46 kafka | log.roll.ms = null 23:16:46 kafka | log.segment.bytes = 1073741824 23:16:46 kafka | log.segment.delete.delay.ms = 60000 23:16:46 kafka | max.connection.creation.rate = 2147483647 23:16:46 kafka | max.connections = 2147483647 23:16:46 kafka | max.connections.per.ip = 2147483647 23:16:46 kafka | max.connections.per.ip.overrides = 23:16:46 kafka | max.incremental.fetch.session.cache.slots = 1000 23:16:46 kafka | message.max.bytes = 1048588 23:16:46 kafka 
| metadata.log.dir = null 23:16:46 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 23:16:46 kafka | metadata.log.max.snapshot.interval.ms = 3600000 23:16:46 kafka | metadata.log.segment.bytes = 1073741824 23:16:46 kafka | metadata.log.segment.min.bytes = 8388608 23:16:46 kafka | metadata.log.segment.ms = 604800000 23:16:46 kafka | metadata.max.idle.interval.ms = 500 23:16:46 kafka | metadata.max.retention.bytes = 104857600 23:16:46 kafka | metadata.max.retention.ms = 604800000 23:16:46 kafka | metric.reporters = [] 23:16:46 kafka | metrics.num.samples = 2 23:16:46 kafka | metrics.recording.level = INFO 23:16:46 kafka | metrics.sample.window.ms = 30000 23:16:46 kafka | min.insync.replicas = 1 23:16:46 kafka | node.id = 1 23:16:46 kafka | num.io.threads = 8 23:16:46 kafka | num.network.threads = 3 23:16:46 prometheus | ts=2024-03-11T23:14:12.552Z caller=main.go:564 level=info msg="No time or size retention was set so using the default time retention" duration=15d 23:16:46 prometheus | ts=2024-03-11T23:14:12.552Z caller=main.go:608 level=info msg="Starting Prometheus Server" mode=server version="(version=2.50.1, branch=HEAD, revision=8c9b0285360a0b6288d76214a75ce3025bce4050)" 23:16:46 prometheus | ts=2024-03-11T23:14:12.552Z caller=main.go:613 level=info build_context="(go=go1.21.7, platform=linux/amd64, user=root@6213bb3ee580, date=20240226-11:36:26, tags=netgo,builtinassets,stringlabels)" 23:16:46 prometheus | ts=2024-03-11T23:14:12.552Z caller=main.go:614 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" 23:16:46 prometheus | ts=2024-03-11T23:14:12.552Z caller=main.go:615 level=info fd_limits="(soft=1048576, hard=1048576)" 23:16:46 prometheus | ts=2024-03-11T23:14:12.552Z caller=main.go:616 level=info vm_limits="(soft=unlimited, hard=unlimited)" 23:16:46 prometheus | ts=2024-03-11T23:14:12.559Z caller=web.go:565 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 23:16:46 prometheus | ts=2024-03-11T23:14:12.560Z caller=main.go:1118 level=info msg="Starting TSDB ..." 23:16:46 prometheus | ts=2024-03-11T23:14:12.562Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090 23:16:46 prometheus | ts=2024-03-11T23:14:12.562Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." 
http2=false address=[::]:9090 23:16:46 prometheus | ts=2024-03-11T23:14:12.563Z caller=head.go:610 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 23:16:46 prometheus | ts=2024-03-11T23:14:12.563Z caller=head.go:692 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=3.49µs 23:16:46 prometheus | ts=2024-03-11T23:14:12.563Z caller=head.go:700 level=info component=tsdb msg="Replaying WAL, this may take a while" 23:16:46 prometheus | ts=2024-03-11T23:14:12.564Z caller=head.go:771 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 23:16:46 prometheus | ts=2024-03-11T23:14:12.564Z caller=head.go:808 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=27.531µs wal_replay_duration=271.777µs wbl_replay_duration=150ns total_replay_duration=323.669µs 23:16:46 prometheus | ts=2024-03-11T23:14:12.573Z caller=main.go:1139 level=info fs_type=EXT4_SUPER_MAGIC 23:16:46 prometheus | ts=2024-03-11T23:14:12.573Z caller=main.go:1142 level=info msg="TSDB started" 23:16:46 prometheus | ts=2024-03-11T23:14:12.573Z caller=main.go:1324 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 23:16:46 prometheus | ts=2024-03-11T23:14:12.574Z caller=main.go:1361 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.042676ms db_storage=1.35µs remote_storage=2.87µs web_handler=650ns query_engine=1.39µs scrape=322.848µs scrape_sd=120.043µs notify=27.931µs notify_sd=32.481µs rules=1.53µs tracing=5.78µs 23:16:46 prometheus | ts=2024-03-11T23:14:12.574Z caller=main.go:1103 level=info msg="Server is ready to receive web requests." 23:16:46 prometheus | ts=2024-03-11T23:14:12.574Z caller=manager.go:146 level=info component="rule manager" msg="Starting rule manager..." 
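The Prometheus entries that end above are logfmt-style key=value records (ts, caller, level, msg, plus timing fields such as wal_replay_duration). As a minimal sketch, assuming only that format, the standard library is enough to turn such a line into a dictionary; the sample line is copied from the output above, and the parser itself is illustrative rather than anything this build runs.

import shlex

def parse_logfmt(line: str) -> dict:
    """Split a logfmt-style line (key=value, values may be quoted) into a dict."""
    fields = {}
    for token in shlex.split(line):  # shlex keeps quoted values such as msg="..." together
        if "=" in token:
            key, _, value = token.partition("=")
            fields[key] = value
    return fields

sample = 'ts=2024-03-11T23:14:12.574Z caller=main.go:1103 level=info msg="Server is ready to receive web requests."'
print(parse_logfmt(sample)["msg"])  # -> Server is ready to receive web requests.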
23:16:46 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:46 policy-apex-pdp | security.providers = null 23:16:46 policy-apex-pdp | send.buffer.bytes = 131072 23:16:46 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:46 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:46 policy-apex-pdp | ssl.cipher.suites = null 23:16:46 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:46 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:46 policy-apex-pdp | ssl.engine.factory.class = null 23:16:46 policy-apex-pdp | ssl.key.password = null 23:16:46 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:16:46 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:16:46 policy-apex-pdp | ssl.keystore.key = null 23:16:46 policy-apex-pdp | ssl.keystore.location = null 23:16:46 policy-apex-pdp | ssl.keystore.password = null 23:16:46 policy-apex-pdp | ssl.keystore.type = JKS 23:16:46 policy-apex-pdp | ssl.protocol = TLSv1.3 23:16:46 policy-apex-pdp | ssl.provider = null 23:16:46 policy-apex-pdp | ssl.secure.random.implementation = null 23:16:46 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:16:46 policy-apex-pdp | ssl.truststore.certificates = null 23:16:46 policy-apex-pdp | ssl.truststore.location = null 23:16:46 policy-apex-pdp | ssl.truststore.password = null 23:16:46 policy-apex-pdp | ssl.truststore.type = JKS 23:16:46 policy-apex-pdp | transaction.timeout.ms = 60000 23:16:46 policy-apex-pdp | transactional.id = null 23:16:46 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:46 policy-apex-pdp | 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.793+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
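The ProducerConfig dump that closes above (acks = -1, enable.idempotence = true, retries = 2147483647, linger.ms = 0, bootstrap.servers = [kafka:9092]) describes an idempotent producer waiting for all in-sync replicas. A hedged sketch collecting those same values into a properties-style text follows; the dict and the rendered snippet are purely illustrative and are not part of the CSIT configuration.

# Producer settings mirrored from the ProducerConfig values logged above.
producer_config = {
    "bootstrap.servers": "kafka:9092",
    "acks": "-1",                  # wait for all in-sync replicas
    "enable.idempotence": "true",  # idempotent, ordered per-partition writes
    "retries": "2147483647",
    "linger.ms": "0",
    "compression.type": "none",
    "key.serializer": "org.apache.kafka.common.serialization.StringSerializer",
    "value.serializer": "org.apache.kafka.common.serialization.StringSerializer",
}

# Render as a Java-style .properties snippet (output only; no file is written).
properties_text = "\n".join(f"{key}={value}" for key, value in producer_config.items())
print(properties_text)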
23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.806+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.807+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.807+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710198889806 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.807+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=209ec75e-50c9-4128-a93d-88ee98cd58de, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.808+00:00|INFO|ServiceManager|main] service manager starting set alive 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.808+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.810+00:00|INFO|ServiceManager|main] service manager starting topic sinks 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.810+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.811+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.812+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.812+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.812+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=ba46bd84-7ae1-41fa-a3bb-e4918f472988, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@e077866 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.813+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=ba46bd84-7ae1-41fa-a3bb-e4918f472988, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.813+00:00|INFO|ServiceManager|main] service manager starting Create REST server 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.832+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: 23:16:46 policy-apex-pdp | [] 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.835+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 23:16:46 policy-pap | Waiting for 
mariadb port 3306... 23:16:46 policy-pap | mariadb (172.17.0.2:3306) open 23:16:46 policy-pap | Waiting for kafka port 9092... 23:16:46 policy-pap | kafka (172.17.0.6:9092) open 23:16:46 policy-pap | Waiting for api port 6969... 23:16:46 policy-pap | api (172.17.0.8:6969) open 23:16:46 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 23:16:46 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 23:16:46 policy-pap | 23:16:46 policy-pap | . ____ _ __ _ _ 23:16:46 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 23:16:46 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 23:16:46 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 23:16:46 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 23:16:46 policy-pap | =========|_|==============|___/=/_/_/_/ 23:16:46 policy-pap | :: Spring Boot :: (v3.1.8) 23:16:46 policy-pap | 23:16:46 policy-pap | [2024-03-11T23:14:37.533+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.10 with PID 34 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 23:16:46 policy-pap | [2024-03-11T23:14:37.535+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" 23:16:46 policy-pap | [2024-03-11T23:14:39.509+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 23:16:46 policy-pap | [2024-03-11T23:14:39.608+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 89 ms. Found 7 JPA repository interfaces. 23:16:46 policy-pap | [2024-03-11T23:14:40.068+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 23:16:46 policy-pap | [2024-03-11T23:14:40.069+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 23:16:46 policy-pap | [2024-03-11T23:14:40.931+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 23:16:46 policy-pap | [2024-03-11T23:14:40.942+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 23:16:46 policy-pap | [2024-03-11T23:14:40.944+00:00|INFO|StandardService|main] Starting service [Tomcat] 23:16:46 policy-pap | [2024-03-11T23:14:40.944+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] 23:16:46 policy-pap | [2024-03-11T23:14:41.056+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 23:16:46 policy-pap | [2024-03-11T23:14:41.056+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3439 ms 23:16:46 policy-pap | [2024-03-11T23:14:41.586+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 23:16:46 policy-pap | [2024-03-11T23:14:41.690+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 23:16:46 policy-pap | [2024-03-11T23:14:41.694+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer 23:16:46 policy-pap | [2024-03-11T23:14:41.741+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 23:16:46 policy-pap | [2024-03-11T23:14:42.164+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 23:16:46 policy-pap | [2024-03-11T23:14:42.206+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 23:16:46 policy-pap | [2024-03-11T23:14:42.329+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@2def7a7a 23:16:46 policy-pap | [2024-03-11T23:14:42.331+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 23:16:46 policy-pap | [2024-03-11T23:14:44.430+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 23:16:46 policy-pap | [2024-03-11T23:14:44.434+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 23:16:46 policy-pap | [2024-03-11T23:14:44.989+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository 23:16:46 policy-pap | [2024-03-11T23:14:45.480+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository 23:16:46 policy-pap | [2024-03-11T23:14:45.602+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository 23:16:46 policy-pap | [2024-03-11T23:14:45.883+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:46 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"3265b548-aec6-42e7-a570-bc27acff5fd9","timestampMs":1710198889813,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup"} 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.963+00:00|INFO|ServiceManager|main] service manager starting Rest Server 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.963+00:00|INFO|ServiceManager|main] service manager starting 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.963+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.963+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5ebd56e9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@63f34b70{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.974+00:00|INFO|ServiceManager|main] service manager started 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.974+00:00|INFO|ServiceManager|main] service manager started 23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.975+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 
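The PDP_STATUS heartbeat shown above is plain JSON with an epoch-millisecond timestampMs; 1710198889813 corresponds to the 23:14:49.813 log time on the same line. A small standard-library sketch decoding such a payload, with the payload copied verbatim from the log and the variable names chosen for illustration:

import json
from datetime import datetime, timezone

heartbeat = '{"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"3265b548-aec6-42e7-a570-bc27acff5fd9","timestampMs":1710198889813,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup"}'

msg = json.loads(heartbeat)
sent_at = datetime.fromtimestamp(msg["timestampMs"] / 1000, tz=timezone.utc)
# Matches the 23:14:49.813 UTC timestamp of the log line that carried this message.
print(msg["messageName"], msg["state"], sent_at.isoformat())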
23:16:46 policy-apex-pdp | [2024-03-11T23:14:49.975+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5ebd56e9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@63f34b70{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:46 policy-apex-pdp | [2024-03-11T23:14:50.109+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: OdmwtGb8RBC2kzsuX5kwmQ 23:16:46 policy-apex-pdp | [2024-03-11T23:14:50.109+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ba46bd84-7ae1-41fa-a3bb-e4918f472988-2, groupId=ba46bd84-7ae1-41fa-a3bb-e4918f472988] Cluster ID: OdmwtGb8RBC2kzsuX5kwmQ 23:16:46 policy-apex-pdp | [2024-03-11T23:14:50.111+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 23:16:46 policy-apex-pdp | [2024-03-11T23:14:50.111+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ba46bd84-7ae1-41fa-a3bb-e4918f472988-2, groupId=ba46bd84-7ae1-41fa-a3bb-e4918f472988] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:16:46 policy-apex-pdp | [2024-03-11T23:14:50.117+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ba46bd84-7ae1-41fa-a3bb-e4918f472988-2, groupId=ba46bd84-7ae1-41fa-a3bb-e4918f472988] (Re-)joining group 23:16:46 policy-apex-pdp | [2024-03-11T23:14:50.131+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ba46bd84-7ae1-41fa-a3bb-e4918f472988-2, groupId=ba46bd84-7ae1-41fa-a3bb-e4918f472988] Request joining group due to: need to re-join with the given member-id: consumer-ba46bd84-7ae1-41fa-a3bb-e4918f472988-2-bdcbc697-b8f3-4f61-9d8f-a6ee69f6351a 23:16:46 policy-apex-pdp | [2024-03-11T23:14:50.131+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ba46bd84-7ae1-41fa-a3bb-e4918f472988-2, groupId=ba46bd84-7ae1-41fa-a3bb-e4918f472988] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 23:16:46 policy-apex-pdp | [2024-03-11T23:14:50.131+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ba46bd84-7ae1-41fa-a3bb-e4918f472988-2, groupId=ba46bd84-7ae1-41fa-a3bb-e4918f472988] (Re-)joining group 23:16:46 policy-apex-pdp | [2024-03-11T23:14:50.589+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls 23:16:46 policy-apex-pdp | [2024-03-11T23:14:50.591+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls 23:16:46 policy-apex-pdp | [2024-03-11T23:14:53.137+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ba46bd84-7ae1-41fa-a3bb-e4918f472988-2, groupId=ba46bd84-7ae1-41fa-a3bb-e4918f472988] Successfully joined group with generation Generation{generationId=1, memberId='consumer-ba46bd84-7ae1-41fa-a3bb-e4918f472988-2-bdcbc697-b8f3-4f61-9d8f-a6ee69f6351a', protocol='range'} 23:16:46 policy-apex-pdp | [2024-03-11T23:14:53.147+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ba46bd84-7ae1-41fa-a3bb-e4918f472988-2, groupId=ba46bd84-7ae1-41fa-a3bb-e4918f472988] Finished assignment for group at generation 1: {consumer-ba46bd84-7ae1-41fa-a3bb-e4918f472988-2-bdcbc697-b8f3-4f61-9d8f-a6ee69f6351a=Assignment(partitions=[policy-pdp-pap-0])} 23:16:46 policy-apex-pdp | [2024-03-11T23:14:53.154+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ba46bd84-7ae1-41fa-a3bb-e4918f472988-2, groupId=ba46bd84-7ae1-41fa-a3bb-e4918f472988] Successfully synced group in generation Generation{generationId=1, memberId='consumer-ba46bd84-7ae1-41fa-a3bb-e4918f472988-2-bdcbc697-b8f3-4f61-9d8f-a6ee69f6351a', protocol='range'} 23:16:46 policy-apex-pdp | [2024-03-11T23:14:53.154+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ba46bd84-7ae1-41fa-a3bb-e4918f472988-2, groupId=ba46bd84-7ae1-41fa-a3bb-e4918f472988] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:46 policy-apex-pdp | [2024-03-11T23:14:53.157+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ba46bd84-7ae1-41fa-a3bb-e4918f472988-2, groupId=ba46bd84-7ae1-41fa-a3bb-e4918f472988] Adding newly assigned partitions: policy-pdp-pap-0 23:16:46 policy-apex-pdp | [2024-03-11T23:14:53.165+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ba46bd84-7ae1-41fa-a3bb-e4918f472988-2, groupId=ba46bd84-7ae1-41fa-a3bb-e4918f472988] Found no committed offset for partition policy-pdp-pap-0 23:16:46 policy-apex-pdp | [2024-03-11T23:14:53.182+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-ba46bd84-7ae1-41fa-a3bb-e4918f472988-2, groupId=ba46bd84-7ae1-41fa-a3bb-e4918f472988] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
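The consumer lines above walk through the usual Kafka group join: coordinator discovery, a first JoinGroup rejected with MemberIdRequiredException, a re-join with the assigned member id, then SyncGroup, partition assignment for policy-pdp-pap-0, and an offset reset. A hedged regex sketch that pulls the generation, member id, and protocol out of the "Successfully joined group" wording shown above; the pattern is an assumption about that wording, not an official format.

import re

line = ("Successfully joined group with generation Generation{generationId=1, "
        "memberId='consumer-ba46bd84-7ae1-41fa-a3bb-e4918f472988-2-bdcbc697-b8f3-4f61-9d8f-a6ee69f6351a', "
        "protocol='range'}")

match = re.search(r"generationId=(\d+), memberId='([^']+)', protocol='([^']+)'", line)
if match:
    generation, member_id, protocol = match.groups()
    print(generation, protocol, member_id)  # -> 1 range consumer-ba46bd84-...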
23:16:46 policy-apex-pdp | [2024-03-11T23:14:56.169+00:00|INFO|RequestLog|qtp1068445309-33] 172.17.0.5 - policyadmin [11/Mar/2024:23:14:56 +0000] "GET /metrics HTTP/1.1" 200 10644 "-" "Prometheus/2.50.1" 23:16:46 policy-apex-pdp | [2024-03-11T23:15:09.812+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 23:16:46 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"907895e9-f7c7-4aa2-a4fd-84e148a46751","timestampMs":1710198909812,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup"} 23:16:46 policy-apex-pdp | [2024-03-11T23:15:09.828+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:46 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"907895e9-f7c7-4aa2-a4fd-84e148a46751","timestampMs":1710198909812,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup"} 23:16:46 policy-apex-pdp | [2024-03-11T23:15:09.830+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:46 policy-apex-pdp | [2024-03-11T23:15:09.980+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:46 policy-apex-pdp | {"source":"pap-dd126ff4-bdda-4dfa-b8de-79b15c74268d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"d8098987-1a06-4b4b-bcac-21f90c18f0d0","timestampMs":1710198909920,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:46 policy-apex-pdp | [2024-03-11T23:15:09.988+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher 23:16:46 policy-apex-pdp | [2024-03-11T23:15:09.988+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] 23:16:46 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"99559673-7f3e-410b-a70d-cce89103c5e6","timestampMs":1710198909988,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup"} 23:16:46 policy-apex-pdp | [2024-03-11T23:15:09.989+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:46 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"d8098987-1a06-4b4b-bcac-21f90c18f0d0","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"ddde4219-3842-42f4-b1e0-15afacfe0147","timestampMs":1710198909989,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:46 policy-apex-pdp | [2024-03-11T23:15:10.001+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:46 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"99559673-7f3e-410b-a70d-cce89103c5e6","timestampMs":1710198909988,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup"} 23:16:46 policy-apex-pdp | [2024-03-11T23:15:10.001+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:46 policy-apex-pdp | [2024-03-11T23:15:10.001+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:46 
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"d8098987-1a06-4b4b-bcac-21f90c18f0d0","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"ddde4219-3842-42f4-b1e0-15afacfe0147","timestampMs":1710198909989,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:46 policy-apex-pdp | [2024-03-11T23:15:10.002+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:46 policy-apex-pdp | [2024-03-11T23:15:10.024+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:46 policy-apex-pdp | {"source":"pap-dd126ff4-bdda-4dfa-b8de-79b15c74268d","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"d27320f5-5a2a-4f8f-ac8b-2a07f89773ff","timestampMs":1710198909921,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:46 policy-apex-pdp | [2024-03-11T23:15:10.027+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:46 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"d27320f5-5a2a-4f8f-ac8b-2a07f89773ff","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"e31beda6-456f-45d4-901d-ef2340703340","timestampMs":1710198910027,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:46 policy-apex-pdp | [2024-03-11T23:15:10.036+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:46 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"d27320f5-5a2a-4f8f-ac8b-2a07f89773ff","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"e31beda6-456f-45d4-901d-ef2340703340","timestampMs":1710198910027,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:46 policy-apex-pdp | [2024-03-11T23:15:10.037+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:46 policy-apex-pdp | [2024-03-11T23:15:10.086+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:46 policy-apex-pdp | {"source":"pap-dd126ff4-bdda-4dfa-b8de-79b15c74268d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"30caae45-2896-49b2-8943-7c3be2c45732","timestampMs":1710198910043,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:46 policy-apex-pdp | [2024-03-11T23:15:10.089+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:46 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"30caae45-2896-49b2-8943-7c3be2c45732","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"d130784a-2b9c-4a70-bb25-71f0f75a6427","timestampMs":1710198910088,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:46 policy-apex-pdp | [2024-03-11T23:15:10.106+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:46 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"30caae45-2896-49b2-8943-7c3be2c45732","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"d130784a-2b9c-4a70-bb25-71f0f75a6427","timestampMs":1710198910088,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:46 policy-apex-pdp | [2024-03-11T23:15:10.107+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:46 policy-apex-pdp | [2024-03-11T23:15:56.096+00:00|INFO|RequestLog|qtp1068445309-28] 172.17.0.5 - policyadmin [11/Mar/2024:23:15:56 +0000] "GET /metrics HTTP/1.1" 200 10648 "-" "Prometheus/2.50.1" 23:16:46 policy-pap | allow.auto.create.topics = true 23:16:46 policy-pap | auto.commit.interval.ms = 5000 23:16:46 policy-pap | auto.include.jmx.reporter = true 23:16:46 policy-pap | auto.offset.reset = latest 23:16:46 policy-pap | bootstrap.servers = [kafka:9092] 23:16:46 policy-pap | check.crcs = true 23:16:46 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:46 policy-pap | client.id = consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-1 23:16:46 policy-pap | client.rack = 23:16:46 policy-pap | connections.max.idle.ms = 540000 23:16:46 policy-pap | default.api.timeout.ms = 60000 23:16:46 policy-pap | enable.auto.commit = true 23:16:46 policy-pap | exclude.internal.topics = true 23:16:46 policy-pap | fetch.max.bytes = 52428800 23:16:46 policy-pap | fetch.max.wait.ms = 500 23:16:46 policy-pap | fetch.min.bytes = 1 23:16:46 policy-pap | group.id = a774452f-60f0-41d2-bd1a-6ce78860e297 23:16:46 policy-pap | group.instance.id = null 23:16:46 policy-pap | heartbeat.interval.ms = 3000 23:16:46 policy-pap | interceptor.classes = [] 23:16:46 policy-pap | 
internal.leave.group.on.close = true 23:16:46 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:46 policy-pap | isolation.level = read_uncommitted 23:16:46 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:46 policy-pap | max.partition.fetch.bytes = 1048576 23:16:46 policy-pap | max.poll.interval.ms = 300000 23:16:46 policy-pap | max.poll.records = 500 23:16:46 policy-pap | metadata.max.age.ms = 300000 23:16:46 policy-pap | metric.reporters = [] 23:16:46 policy-pap | metrics.num.samples = 2 23:16:46 policy-pap | metrics.recording.level = INFO 23:16:46 policy-pap | metrics.sample.window.ms = 30000 23:16:46 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:46 policy-pap | receive.buffer.bytes = 65536 23:16:46 policy-pap | reconnect.backoff.max.ms = 1000 23:16:46 policy-pap | reconnect.backoff.ms = 50 23:16:46 policy-pap | request.timeout.ms = 30000 23:16:46 policy-pap | retry.backoff.ms = 100 23:16:46 policy-pap | sasl.client.callback.handler.class = null 23:16:46 policy-pap | sasl.jaas.config = null 23:16:46 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:46 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:46 policy-pap | sasl.kerberos.service.name = null 23:16:46 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:46 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:46 policy-pap | sasl.login.callback.handler.class = null 23:16:46 policy-pap | sasl.login.class = null 23:16:46 policy-pap | sasl.login.connect.timeout.ms = null 23:16:46 policy-pap | sasl.login.read.timeout.ms = null 23:16:46 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:46 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:46 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:46 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:46 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:46 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, CONSTRAINTS VARCHAR(255) NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 
0230-jpatoscadatatype_properties.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:46 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json 23:16:46 simulator | overriding logback.xml 23:16:46 simulator | 2024-03-11 23:14:12,070 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json 23:16:46 simulator | 2024-03-11 23:14:12,175 INFO org.onap.policy.models.simulators starting 23:16:46 simulator | 2024-03-11 23:14:12,176 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties 23:16:46 simulator | 2024-03-11 23:14:12,384 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION 23:16:46 simulator | 2024-03-11 23:14:12,385 INFO org.onap.policy.models.simulators starting A&AI simulator 23:16:46 simulator | 2024-03-11 23:14:12,509 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:46 simulator | 2024-03-11 23:14:12,521 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, 
servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:46 simulator | 2024-03-11 23:14:12,527 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:46 simulator | 2024-03-11 23:14:12,532 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:16:46 simulator | 2024-03-11 23:14:12,608 INFO Session workerName=node0 23:16:46 simulator | 2024-03-11 23:14:13,269 INFO Using GSON for REST calls 23:16:46 simulator | 2024-03-11 23:14:13,366 INFO Started o.e.j.s.ServletContextHandler@2a2c13a8{/,null,AVAILABLE} 23:16:46 simulator | 2024-03-11 23:14:13,375 INFO Started A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} 23:16:46 simulator | 2024-03-11 23:14:13,383 INFO Started Server@45905bff{STARTING}[11.0.20,sto=0] @1894ms 23:16:46 simulator | 2024-03-11 23:14:13,383 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,AVAILABLE}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4141 ms. 
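Each simulator above logs a Jetty "Started Server@...{STARTING}[11.0.20,sto=0] @<N>ms" line (JVM uptime at start) and a JettyJerseyServer "pending time is <M> ms." summary. A minimal sketch extracting those two numbers; the sample lines are taken from the A&AI simulator output above and the regexes are assumptions based on that wording.

import re

lines = [
    "2024-03-11 23:14:13,383 INFO Started Server@45905bff{STARTING}[11.0.20,sto=0] @1894ms",
    "... pending time is 4141 ms.",
]

for line in lines:
    started = re.search(r"Started Server@\w+\{STARTING\}.*? @(\d+)ms", line)
    pending = re.search(r"pending time is (\d+) ms", line)
    if started:
        print("server started at JVM uptime:", started.group(1), "ms")
    if pending:
        print("reported pending time:", pending.group(1), "ms")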
23:16:46 simulator | 2024-03-11 23:14:13,395 INFO org.onap.policy.models.simulators starting SDNC simulator 23:16:46 simulator | 2024-03-11 23:14:13,400 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:46 simulator | 2024-03-11 23:14:13,400 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:46 simulator | 2024-03-11 23:14:13,402 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:46 simulator | 2024-03-11 23:14:13,405 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:16:46 simulator | 2024-03-11 23:14:13,428 INFO Session workerName=node0 23:16:46 simulator | 2024-03-11 23:14:13,528 INFO Using GSON for REST calls 23:16:46 simulator | 2024-03-11 23:14:13,539 INFO Started o.e.j.s.ServletContextHandler@62452cc9{/,null,AVAILABLE} 23:16:46 simulator | 2024-03-11 23:14:13,541 INFO Started SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} 23:16:46 simulator | 2024-03-11 23:14:13,541 INFO Started Server@45e37a7e{STARTING}[11.0.20,sto=0] @2053ms 23:16:46 simulator | 2024-03-11 23:14:13,541 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, 
swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,AVAILABLE}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4860 ms. 23:16:46 simulator | 2024-03-11 23:14:13,542 INFO org.onap.policy.models.simulators starting SO simulator 23:16:46 simulator | 2024-03-11 23:14:13,546 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:46 simulator | 2024-03-11 23:14:13,547 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:46 simulator | 2024-03-11 23:14:13,548 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:46 simulator | 2024-03-11 23:14:13,549 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:16:46 simulator | 2024-03-11 23:14:13,552 INFO Session workerName=node0 23:16:46 simulator | 2024-03-11 23:14:13,605 INFO Using GSON for REST calls 23:16:46 
simulator | 2024-03-11 23:14:13,617 INFO Started o.e.j.s.ServletContextHandler@488eb7f2{/,null,AVAILABLE} 23:16:46 simulator | 2024-03-11 23:14:13,619 INFO Started SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} 23:16:46 simulator | 2024-03-11 23:14:13,619 INFO Started Server@7516e4e5{STARTING}[11.0.20,sto=0] @2130ms 23:16:46 simulator | 2024-03-11 23:14:13,619 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,AVAILABLE}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4929 ms. 23:16:46 simulator | 2024-03-11 23:14:13,620 INFO org.onap.policy.models.simulators starting VFC simulator 23:16:46 simulator | 2024-03-11 23:14:13,624 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:46 simulator | 2024-03-11 23:14:13,624 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:46 simulator | 2024-03-11 23:14:13,625 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, 
(http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:46 simulator | 2024-03-11 23:14:13,626 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:16:46 simulator | 2024-03-11 23:14:13,636 INFO Session workerName=node0 23:16:46 simulator | 2024-03-11 23:14:13,696 INFO Using GSON for REST calls 23:16:46 simulator | 2024-03-11 23:14:13,705 INFO Started o.e.j.s.ServletContextHandler@6035b93b{/,null,AVAILABLE} 23:16:46 simulator | 2024-03-11 23:14:13,706 INFO Started VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} 23:16:46 simulator | 2024-03-11 23:14:13,706 INFO Started Server@6f0b0a5e{STARTING}[11.0.20,sto=0] @2218ms 23:16:46 policy-pap | sasl.mechanism = GSSAPI 23:16:46 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:46 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:46 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:46 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:46 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:46 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:46 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:46 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:46 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:46 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:46 policy-pap | security.protocol = PLAINTEXT 23:16:46 policy-pap | security.providers = null 23:16:46 policy-pap | send.buffer.bytes = 131072 23:16:46 policy-pap | session.timeout.ms = 45000 23:16:46 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:46 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:46 policy-pap | ssl.cipher.suites = null 23:16:46 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:46 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:46 policy-pap | ssl.engine.factory.class = null 23:16:46 policy-pap | ssl.key.password = null 23:16:46 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:46 policy-pap | ssl.keystore.certificate.chain = null 23:16:46 policy-pap | ssl.keystore.key = null 23:16:46 policy-pap | ssl.keystore.location = null 23:16:46 policy-pap | ssl.keystore.password = null 23:16:46 policy-pap | ssl.keystore.type = JKS 23:16:46 policy-pap | ssl.protocol = TLSv1.3 23:16:46 policy-pap | ssl.provider = null 23:16:46 policy-pap | ssl.secure.random.implementation = null 23:16:46 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:46 policy-pap | ssl.truststore.certificates = null 23:16:46 policy-pap | ssl.truststore.location = null 23:16:46 policy-pap | ssl.truststore.password = null 23:16:46 policy-pap | ssl.truststore.type = JKS 23:16:46 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:46 policy-pap | 23:16:46 policy-pap | [2024-03-11T23:14:46.064+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:46 policy-pap | [2024-03-11T23:14:46.065+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:46 policy-pap | [2024-03-11T23:14:46.065+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710198886063 23:16:46 policy-pap | [2024-03-11T23:14:46.067+00:00|INFO|KafkaConsumer|main] [Consumer 
clientId=consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-1, groupId=a774452f-60f0-41d2-bd1a-6ce78860e297] Subscribed to topic(s): policy-pdp-pap 23:16:46 policy-pap | [2024-03-11T23:14:46.068+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:46 policy-pap | allow.auto.create.topics = true 23:16:46 policy-pap | auto.commit.interval.ms = 5000 23:16:46 policy-pap | auto.include.jmx.reporter = true 23:16:46 policy-pap | auto.offset.reset = latest 23:16:46 policy-pap | bootstrap.servers = [kafka:9092] 23:16:46 policy-pap | check.crcs = true 23:16:46 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:46 policy-pap | client.id = consumer-policy-pap-2 23:16:46 policy-pap | client.rack = 23:16:46 policy-pap | connections.max.idle.ms = 540000 23:16:46 policy-pap | default.api.timeout.ms = 60000 23:16:46 policy-pap | enable.auto.commit = true 23:16:46 policy-pap | exclude.internal.topics = true 23:16:46 policy-pap | fetch.max.bytes = 52428800 23:16:46 policy-pap | fetch.max.wait.ms = 500 23:16:46 policy-pap | fetch.min.bytes = 1 23:16:46 policy-pap | group.id = policy-pap 23:16:46 policy-pap | group.instance.id = null 23:16:46 policy-pap | heartbeat.interval.ms = 3000 23:16:46 policy-pap | interceptor.classes = [] 23:16:46 policy-pap | internal.leave.group.on.close = true 23:16:46 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:46 policy-pap | isolation.level = read_uncommitted 23:16:46 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:46 policy-pap | max.partition.fetch.bytes = 1048576 23:16:46 policy-pap | max.poll.interval.ms = 300000 23:16:46 policy-pap | max.poll.records = 500 23:16:46 policy-pap | metadata.max.age.ms = 300000 23:16:46 policy-pap | metric.reporters = [] 23:16:46 simulator | 2024-03-11 23:14:13,707 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,AVAILABLE}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4918 ms. 
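For readers tracing the PAP Kafka settings dumped in this log (bootstrap.servers = [kafka:9092], group.id = policy-pap, key/value deserializer = StringDeserializer, auto.offset.reset = latest, subscribed topic policy-pdp-pap), the snippet below is a minimal illustrative sketch only — it is not part of this CSIT run or of the PAP code — showing a standalone consumer built with the standard Apache Kafka Java client using those same logged values. The class name PdpPapTopicTail is hypothetical.

    // Illustrative sketch, not part of this CSIT log: a standalone consumer
    // configured with the values reported in the PAP ConsumerConfig dump above.
    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class PdpPapTopicTail {                       // hypothetical class name
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka:9092");    // as logged
            props.put("group.id", "policy-pap");             // as logged
            props.put("auto.offset.reset", "latest");        // as logged
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // Topic name taken from the "Subscribed to topic(s)" lines in this log.
                consumer.subscribe(List.of("policy-pdp-pap"));
                while (true) {
                    ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofSeconds(1));
                    for (ConsumerRecord<String, String> r : records) {
                        System.out.println(r.value());       // print each PDP-PAP message
                    }
                }
            }
        }
    }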
23:16:46 simulator | 2024-03-11 23:14:13,708 INFO org.onap.policy.models.simulators started 23:16:46 policy-pap | metrics.num.samples = 2 23:16:46 policy-pap | metrics.recording.level = INFO 23:16:46 policy-pap | metrics.sample.window.ms = 30000 23:16:46 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:46 policy-pap | receive.buffer.bytes = 65536 23:16:46 policy-pap | reconnect.backoff.max.ms = 1000 23:16:46 policy-pap | reconnect.backoff.ms = 50 23:16:46 policy-pap | request.timeout.ms = 30000 23:16:46 policy-pap | retry.backoff.ms = 100 23:16:46 policy-pap | sasl.client.callback.handler.class = null 23:16:46 policy-pap | sasl.jaas.config = null 23:16:46 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:46 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:46 policy-pap | sasl.kerberos.service.name = null 23:16:46 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:46 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:46 policy-pap | sasl.login.callback.handler.class = null 23:16:46 policy-pap | sasl.login.class = null 23:16:46 policy-pap | sasl.login.connect.timeout.ms = null 23:16:46 policy-pap | sasl.login.read.timeout.ms = null 23:16:46 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:46 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:46 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:46 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:46 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:46 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:46 policy-pap | sasl.mechanism = GSSAPI 23:16:46 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:46 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:46 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:46 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:46 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:46 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:46 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:46 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:46 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:46 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:46 policy-pap | security.protocol = PLAINTEXT 23:16:46 policy-pap | security.providers = null 23:16:46 policy-pap | send.buffer.bytes = 131072 23:16:46 policy-pap | session.timeout.ms = 45000 23:16:46 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:46 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:46 policy-pap | ssl.cipher.suites = null 23:16:46 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:46 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:46 policy-pap | ssl.engine.factory.class = null 23:16:46 policy-pap | ssl.key.password = null 23:16:46 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:46 policy-pap | ssl.keystore.certificate.chain = null 23:16:46 policy-pap | ssl.keystore.key = null 23:16:46 policy-pap | ssl.keystore.location = null 23:16:46 policy-pap | ssl.keystore.password = null 23:16:46 policy-pap | ssl.keystore.type = JKS 23:16:46 policy-pap | ssl.protocol = TLSv1.3 23:16:46 policy-pap | ssl.provider = null 23:16:46 policy-pap | ssl.secure.random.implementation = null 23:16:46 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:46 
policy-pap | ssl.truststore.certificates = null 23:16:46 policy-pap | ssl.truststore.location = null 23:16:46 policy-pap | ssl.truststore.password = null 23:16:46 policy-pap | ssl.truststore.type = JKS 23:16:46 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:46 policy-pap | 23:16:46 policy-pap | [2024-03-11T23:14:46.074+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:46 policy-pap | [2024-03-11T23:14:46.074+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:46 policy-pap | [2024-03-11T23:14:46.074+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710198886074 23:16:46 policy-pap | [2024-03-11T23:14:46.074+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 23:16:46 policy-pap | [2024-03-11T23:14:46.408+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 23:16:46 policy-pap | [2024-03-11T23:14:46.565+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 23:16:46 policy-pap | [2024-03-11T23:14:46.811+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@71d2261e, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@53917c92, org.springframework.security.web.context.SecurityContextHolderFilter@7c359808, org.springframework.security.web.header.HeaderWriterFilter@52963839, org.springframework.security.web.authentication.logout.LogoutFilter@6787bd41, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@39420d59, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@16361e61, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@1734b1a, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@1fa796a4, org.springframework.security.web.access.ExceptionTranslationFilter@7ce4498f, org.springframework.security.web.access.intercept.AuthorizationFilter@f287a4e] 23:16:46 policy-pap | [2024-03-11T23:14:47.602+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 23:16:46 policy-pap | [2024-03-11T23:14:47.703+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 23:16:46 policy-pap | [2024-03-11T23:14:47.738+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' 23:16:46 policy-pap | [2024-03-11T23:14:47.759+00:00|INFO|ServiceManager|main] Policy PAP starting 23:16:46 policy-pap | [2024-03-11T23:14:47.760+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 23:16:46 policy-pap | [2024-03-11T23:14:47.761+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 23:16:46 policy-pap | [2024-03-11T23:14:47.762+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 23:16:46 policy-pap | [2024-03-11T23:14:47.762+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 23:16:46 policy-pap | [2024-03-11T23:14:47.763+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 23:16:46 policy-pap | [2024-03-11T23:14:47.763+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 23:16:46 policy-pap | [2024-03-11T23:14:47.767+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=a774452f-60f0-41d2-bd1a-6ce78860e297, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@2a525f88 23:16:46 policy-pap | [2024-03-11T23:14:47.777+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=a774452f-60f0-41d2-bd1a-6ce78860e297, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, 
apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:46 policy-pap | [2024-03-11T23:14:47.778+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:46 policy-pap | allow.auto.create.topics = true 23:16:46 policy-pap | auto.commit.interval.ms = 5000 23:16:46 policy-pap | auto.include.jmx.reporter = true 23:16:46 policy-pap | auto.offset.reset = latest 23:16:46 policy-pap | bootstrap.servers = [kafka:9092] 23:16:46 policy-pap | check.crcs = true 23:16:46 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:46 policy-pap | client.id = consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3 23:16:46 policy-pap | client.rack = 23:16:46 policy-pap | connections.max.idle.ms = 540000 23:16:46 policy-pap | default.api.timeout.ms = 60000 23:16:46 policy-pap | enable.auto.commit = true 23:16:46 policy-pap | exclude.internal.topics = true 23:16:46 policy-pap | fetch.max.bytes = 52428800 23:16:46 policy-pap | fetch.max.wait.ms = 500 23:16:46 policy-pap | fetch.min.bytes = 1 23:16:46 policy-pap | group.id = a774452f-60f0-41d2-bd1a-6ce78860e297 23:16:46 policy-pap | group.instance.id = null 23:16:46 policy-pap | heartbeat.interval.ms = 3000 23:16:46 policy-pap | interceptor.classes = [] 23:16:46 policy-pap | internal.leave.group.on.close = true 23:16:46 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:46 policy-pap | isolation.level = read_uncommitted 23:16:46 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:46 policy-pap | max.partition.fetch.bytes = 1048576 23:16:46 policy-pap | max.poll.interval.ms = 300000 23:16:46 policy-pap | max.poll.records = 500 23:16:46 policy-pap | metadata.max.age.ms = 300000 23:16:46 policy-pap | metric.reporters = [] 23:16:46 policy-pap | metrics.num.samples = 2 23:16:46 policy-pap | metrics.recording.level = INFO 23:16:46 policy-pap | metrics.sample.window.ms = 30000 23:16:46 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:46 policy-pap | receive.buffer.bytes = 65536 23:16:46 policy-pap | reconnect.backoff.max.ms = 1000 23:16:46 policy-pap | reconnect.backoff.ms = 50 23:16:46 policy-pap | request.timeout.ms = 30000 23:16:46 policy-pap | retry.backoff.ms = 100 23:16:46 policy-pap | sasl.client.callback.handler.class = null 23:16:46 policy-pap | sasl.jaas.config = null 23:16:46 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:46 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:46 policy-pap | sasl.kerberos.service.name = null 23:16:46 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:46 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:46 policy-pap | sasl.login.callback.handler.class = null 23:16:46 policy-pap | sasl.login.class = null 23:16:46 policy-pap | sasl.login.connect.timeout.ms = null 23:16:46 policy-pap | sasl.login.read.timeout.ms = null 23:16:46 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:46 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:46 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:46 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:46 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT 
EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:46 policy-pap | sasl.login.retry.backoff.ms = 
100 23:16:46 policy-pap | sasl.mechanism = GSSAPI 23:16:46 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:46 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:46 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:46 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:46 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:46 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:46 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:46 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:46 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:46 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:46 policy-pap | security.protocol = PLAINTEXT 23:16:46 policy-pap | security.providers = null 23:16:46 policy-pap | send.buffer.bytes = 131072 23:16:46 policy-pap | session.timeout.ms = 45000 23:16:46 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:46 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:46 policy-pap | ssl.cipher.suites = null 23:16:46 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:46 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:46 policy-pap | ssl.engine.factory.class = null 23:16:46 policy-pap | ssl.key.password = null 23:16:46 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:46 policy-pap | ssl.keystore.certificate.chain = null 23:16:46 policy-pap | ssl.keystore.key = null 23:16:46 policy-pap | ssl.keystore.location = null 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.150036811Z level=info msg="Executing migration" id="create tag table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.151383555Z level=info msg="Migration successfully executed" id="create tag table" duration=1.345724ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.157281246Z level=info msg="Executing migration" id="add index tag.key_value" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.158250541Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=968.505µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.16406916Z level=info msg="Executing migration" id="create login attempt table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.165414433Z level=info msg="Migration successfully executed" id="create login attempt table" duration=1.347514ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.169217181Z level=info msg="Executing migration" id="add index login_attempt.username" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.170842672Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.625071ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.178087608Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.179067632Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=982.045µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.182822768Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.200143371Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=17.320713ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.204487531Z level=info 
msg="Executing migration" id="create login_attempt v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.205182319Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=694.027µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.210588007Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.212110496Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=1.52331ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.215913093Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.216530498Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=616.265µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.219755252Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.22048293Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=727.088µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.226609026Z level=info msg="Executing migration" id="create user auth table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.22755771Z level=info msg="Migration successfully executed" id="create user auth table" duration=946.154µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.231022689Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.232822854Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=1.798395ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.238159851Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.238323375Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=163.074µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.2447768Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.253459432Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=8.680432ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.260169443Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.263915378Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=3.741895ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.267236114Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.272522228Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=5.284704ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.278881111Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.286574177Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=7.691106ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.292096918Z level=info msg="Executing migration" id="Add 
index to user_id column in user_auth" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.292862318Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=765µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.29606815Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.304126976Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=8.055306ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.308168758Z level=info msg="Executing migration" id="create server_lock table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.309572225Z level=info msg="Migration successfully executed" id="create server_lock table" duration=1.402796ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.314379347Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.315441215Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=1.060958ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.323152672Z level=info msg="Executing migration" id="create user auth token table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.324594848Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.441576ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.331652358Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.333413053Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.760495ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.337547019Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.33878351Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.236971ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.342642278Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.343714606Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=1.072158ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.348765915Z level=info msg="Executing migration" id="Add revoked_at to the user auth token" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.355239781Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=6.472885ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.363098261Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 23:16:46 kafka | num.partitions = 1 23:16:46 kafka | num.recovery.threads.per.data.dir = 1 23:16:46 kafka | num.replica.alter.log.dirs.threads = null 23:16:46 kafka | num.replica.fetchers = 1 23:16:46 kafka | offset.metadata.max.bytes = 4096 23:16:46 kafka | offsets.commit.required.acks = -1 23:16:46 kafka | offsets.commit.timeout.ms = 5000 23:16:46 kafka | offsets.load.buffer.size = 5242880 23:16:46 kafka | offsets.retention.check.interval.ms = 600000 23:16:46 kafka | offsets.retention.minutes = 10080 23:16:46 kafka | offsets.topic.compression.codec = 0 23:16:46 kafka | 
offsets.topic.num.partitions = 50 23:16:46 kafka | offsets.topic.replication.factor = 1 23:16:46 kafka | offsets.topic.segment.bytes = 104857600 23:16:46 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 23:16:46 kafka | password.encoder.iterations = 4096 23:16:46 kafka | password.encoder.key.length = 128 23:16:46 kafka | password.encoder.keyfactory.algorithm = null 23:16:46 kafka | password.encoder.old.secret = null 23:16:46 kafka | password.encoder.secret = null 23:16:46 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 23:16:46 kafka | process.roles = [] 23:16:46 kafka | producer.id.expiration.check.interval.ms = 600000 23:16:46 kafka | producer.id.expiration.ms = 86400000 23:16:46 kafka | producer.purgatory.purge.interval.requests = 1000 23:16:46 kafka | queued.max.request.bytes = -1 23:16:46 kafka | queued.max.requests = 500 23:16:46 kafka | quota.window.num = 11 23:16:46 kafka | quota.window.size.seconds = 1 23:16:46 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 23:16:46 kafka | remote.log.manager.task.interval.ms = 30000 23:16:46 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 23:16:46 kafka | remote.log.manager.task.retry.backoff.ms = 500 23:16:46 kafka | remote.log.manager.task.retry.jitter = 0.2 23:16:46 kafka | remote.log.manager.thread.pool.size = 10 23:16:46 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 23:16:46 kafka | remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager 23:16:46 kafka | remote.log.metadata.manager.class.path = null 23:16:46 kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. 23:16:46 kafka | remote.log.metadata.manager.listener.name = null 23:16:46 kafka | remote.log.reader.max.pending.tasks = 100 23:16:46 kafka | remote.log.reader.threads = 10 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.364144658Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.045397ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.369443573Z level=info msg="Executing migration" id="create cache_data table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.371073795Z level=info msg="Migration successfully executed" id="create cache_data table" duration=1.629892ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.377373726Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.378376391Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.001285ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.382421455Z level=info msg="Executing migration" id="create short_url table v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.383458241Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.030076ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.388571342Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.390390878Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.818667ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.39790555Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 23:16:46 grafana | logger=migrator 
t=2024-03-11T23:14:16.398136726Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=230.446µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.404895599Z level=info msg="Executing migration" id="delete alert_definition table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.405068273Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=171.714µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.408698835Z level=info msg="Executing migration" id="recreate alert_definition table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.410388739Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.688454ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.41475259Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.416923576Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=2.171396ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.421697958Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.422756664Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.058726ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.426597632Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.426706165Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=107.933µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.430437831Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.431483627Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.045897ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.437404178Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.43902079Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.615142ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.44528722Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.447085146Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.800107ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.451058887Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.452244158Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.18573ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.457782129Z level=info msg="Executing migration" id="Add column paused in alert_definition" 23:16:46 
grafana | logger=migrator t=2024-03-11T23:14:16.463456384Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=5.673605ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.467321902Z level=info msg="Executing migration" id="drop alert_definition table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.46844237Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=1.116338ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.47432473Z level=info msg="Executing migration" id="delete alert_definition_version table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.474489726Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=164.225µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.479569645Z level=info msg="Executing migration" id="recreate alert_definition_version table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.481204337Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.634982ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.522685616Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.52441281Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=1.722813ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.530091984Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.531814599Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.721545ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.53615256Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.536279253Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=120.353µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.540172733Z level=info msg="Executing migration" id="drop alert_definition_version table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.541216249Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.043247ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.548815663Z level=info msg="Executing migration" id="create alert_instance table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.550015553Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.19948ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.55652247Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 23:16:46 kafka | remote.log.storage.manager.class.name = null 23:16:46 kafka | remote.log.storage.manager.class.path = null 23:16:46 kafka | remote.log.storage.manager.impl.prefix = rsm.config. 
23:16:46 kafka | remote.log.storage.system.enable = false 23:16:46 kafka | replica.fetch.backoff.ms = 1000 23:16:46 kafka | replica.fetch.max.bytes = 1048576 23:16:46 kafka | replica.fetch.min.bytes = 1 23:16:46 kafka | replica.fetch.response.max.bytes = 10485760 23:16:46 kafka | replica.fetch.wait.max.ms = 500 23:16:46 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 23:16:46 kafka | replica.lag.time.max.ms = 30000 23:16:46 kafka | replica.selector.class = null 23:16:46 kafka | replica.socket.receive.buffer.bytes = 65536 23:16:46 kafka | replica.socket.timeout.ms = 30000 23:16:46 kafka | replication.quota.window.num = 11 23:16:46 kafka | replication.quota.window.size.seconds = 1 23:16:46 kafka | request.timeout.ms = 30000 23:16:46 kafka | reserved.broker.max.id = 1000 23:16:46 kafka | sasl.client.callback.handler.class = null 23:16:46 kafka | sasl.enabled.mechanisms = [GSSAPI] 23:16:46 kafka | sasl.jaas.config = null 23:16:46 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:46 kafka | sasl.kerberos.min.time.before.relogin = 60000 23:16:46 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 23:16:46 kafka | sasl.kerberos.service.name = null 23:16:46 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:46 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:46 kafka | sasl.login.callback.handler.class = null 23:16:46 kafka | sasl.login.class = null 23:16:46 kafka | sasl.login.connect.timeout.ms = null 23:16:46 kafka | sasl.login.read.timeout.ms = null 23:16:46 kafka | sasl.login.refresh.buffer.seconds = 300 23:16:46 kafka | sasl.login.refresh.min.period.seconds = 60 23:16:46 kafka | sasl.login.refresh.window.factor = 0.8 23:16:46 kafka | sasl.login.refresh.window.jitter = 0.05 23:16:46 kafka | sasl.login.retry.backoff.max.ms = 10000 23:16:46 kafka | sasl.login.retry.backoff.ms = 100 23:16:46 kafka | sasl.mechanism.controller.protocol = GSSAPI 23:16:46 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 23:16:46 kafka | sasl.oauthbearer.clock.skew.seconds = 30 23:16:46 kafka | sasl.oauthbearer.expected.audience = null 23:16:46 kafka | sasl.oauthbearer.expected.issuer = null 23:16:46 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:46 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:46 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:46 kafka | sasl.oauthbearer.jwks.endpoint.url = null 23:16:46 kafka | sasl.oauthbearer.scope.claim.name = scope 23:16:46 kafka | sasl.oauthbearer.sub.claim.name = sub 23:16:46 kafka | sasl.oauthbearer.token.endpoint.url = null 23:16:46 kafka | sasl.server.callback.handler.class = null 23:16:46 kafka | sasl.server.max.receive.size = 524288 23:16:46 kafka | security.inter.broker.protocol = PLAINTEXT 23:16:46 kafka | security.providers = null 23:16:46 kafka | server.max.startup.time.ms = 9223372036854775807 23:16:46 kafka | socket.connection.setup.timeout.max.ms = 30000 23:16:46 kafka | socket.connection.setup.timeout.ms = 10000 23:16:46 kafka | socket.listen.backlog.size = 50 23:16:46 kafka | socket.receive.buffer.bytes = 102400 23:16:46 kafka | socket.request.max.bytes = 104857600 23:16:46 kafka | socket.send.buffer.bytes = 102400 23:16:46 kafka | ssl.cipher.suites = [] 23:16:46 kafka | ssl.client.auth = none 23:16:46 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:46 kafka | ssl.endpoint.identification.algorithm = https 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.558304395Z level=info msg="Migration successfully executed" id="add index in 
alert_instance table on def_org_id, def_uid and current_state columns" duration=1.780575ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.563339554Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.564524295Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.18306ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.568632959Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.577203228Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=8.571138ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.580786149Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.581537039Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=749.43µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.586379172Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.587392638Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.013916ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.593420452Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.620715529Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=27.296747ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.627045611Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.654620205Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=27.573834ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.659656364Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.660616028Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=956.994µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.664739143Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.665746819Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.007717ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.673223919Z level=info msg="Executing migration" id="add current_reason column related to current_state" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.678896374Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.671505ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.684149028Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 23:16:46 grafana | 
logger=migrator t=2024-03-11T23:14:16.689783082Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.635254ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.693901048Z level=info msg="Executing migration" id="create alert_rule table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.694960744Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.058617ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.699698855Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.700863545Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.16472ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.70575057Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.707401673Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.651042ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.712541894Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.71398869Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.447326ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.719619654Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.719736397Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=116.403µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.724616272Z level=info msg="Executing migration" id="add column for to alert_rule" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.730996475Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.378632ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.735826728Z level=info msg="Executing migration" id="add column annotations to alert_rule" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.741662207Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=5.833109ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.745533036Z level=info msg="Executing migration" id="add column labels to alert_rule" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.751648991Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=6.117726ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.757128761Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.758100117Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=969.726µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.762498269Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.763554676Z level=info msg="Migration successfully executed" 
id="add index in alert_rule on org_id, namespase_uid and title columns" duration=1.055817ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.767063626Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.772945976Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=5.88143ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.778967369Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.784986382Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=6.017153ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.7887994Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.789937609Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=1.136089ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.796679112Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 23:16:46 kafka | ssl.engine.factory.class = null 23:16:46 kafka | ssl.key.password = null 23:16:46 kafka | ssl.keymanager.algorithm = SunX509 23:16:46 kafka | ssl.keystore.certificate.chain = null 23:16:46 kafka | ssl.keystore.key = null 23:16:46 kafka | ssl.keystore.location = null 23:16:46 kafka | ssl.keystore.password = null 23:16:46 kafka | ssl.keystore.type = JKS 23:16:46 kafka | ssl.principal.mapping.rules = DEFAULT 23:16:46 kafka | ssl.protocol = TLSv1.3 23:16:46 kafka | ssl.provider = null 23:16:46 kafka | ssl.secure.random.implementation = null 23:16:46 kafka | ssl.trustmanager.algorithm = PKIX 23:16:46 kafka | ssl.truststore.certificates = null 23:16:46 kafka | ssl.truststore.location = null 23:16:46 kafka | ssl.truststore.password = null 23:16:46 kafka | ssl.truststore.type = JKS 23:16:46 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 23:16:46 kafka | transaction.max.timeout.ms = 900000 23:16:46 kafka | transaction.partition.verification.enable = true 23:16:46 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 23:16:46 kafka | transaction.state.log.load.buffer.size = 5242880 23:16:46 kafka | transaction.state.log.min.isr = 2 23:16:46 kafka | transaction.state.log.num.partitions = 50 23:16:46 kafka | transaction.state.log.replication.factor = 3 23:16:46 kafka | transaction.state.log.segment.bytes = 104857600 23:16:46 kafka | transactional.id.expiration.ms = 604800000 23:16:46 kafka | unclean.leader.election.enable = false 23:16:46 kafka | unstable.api.versions.enable = false 23:16:46 kafka | zookeeper.clientCnxnSocket = null 23:16:46 kafka | zookeeper.connect = zookeeper:2181 23:16:46 kafka | zookeeper.connection.timeout.ms = null 23:16:46 kafka | zookeeper.max.in.flight.requests = 10 23:16:46 kafka | zookeeper.metadata.migration.enable = false 23:16:46 kafka | zookeeper.session.timeout.ms = 18000 23:16:46 kafka | zookeeper.set.acl = false 23:16:46 kafka | zookeeper.ssl.cipher.suites = null 23:16:46 kafka | zookeeper.ssl.client.enable = false 23:16:46 kafka | zookeeper.ssl.crl.enable = false 23:16:46 kafka | zookeeper.ssl.enabled.protocols = null 23:16:46 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 23:16:46 kafka | zookeeper.ssl.keystore.location = 
null 23:16:46 kafka | zookeeper.ssl.keystore.password = null 23:16:46 kafka | zookeeper.ssl.keystore.type = null 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.806793009Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=10.114987ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.812842744Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.818741464Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.89801ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.822268795Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.822337507Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=69.062µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.825032026Z level=info msg="Executing migration" id="create alert_rule_version table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.826291927Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.255371ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.834932128Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.836099178Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.16643ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.841946417Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.84405237Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=2.105533ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.848119845Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.848294309Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=175.964µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.853293397Z level=info msg="Executing migration" id="add column for to alert_rule_version" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.85967863Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=6.384522ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.8632257Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.869603943Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.377353ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.901235381Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.908800774Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=7.565773ms 23:16:46 grafana | logger=migrator 
t=2024-03-11T23:14:16.912729794Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.91887476Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.141176ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.924862733Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.931154385Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.290812ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.936765718Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.93683452Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=69.152µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.941896498Z level=info msg="Executing migration" id=create_alert_configuration_table 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.943411927Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.514519ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.948401205Z level=info msg="Executing migration" id="Add column default in alert_configuration" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.954697735Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=6.29547ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.958147713Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.958303117Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=69.452µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.960989686Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.96744728Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.456024ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.976227605Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.977937248Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.708373ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.982632029Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.989697178Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=7.06305ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.993228179Z level=info msg="Executing migration" id=create_ngalert_configuration_table 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.994000859Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=772.229µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.997665032Z level=info 
msg="Executing migration" id="add index in ngalert_configuration on org_id column" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:16.998630537Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=965.165µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.004691699Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.011186846Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=6.496857ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.016477754Z level=info msg="Executing migration" id="create provenance_type table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.017303901Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=825.407µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.025738754Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.028601353Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=2.862448ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.034547014Z level=info msg="Executing migration" id="create alert_image table" 23:16:46 policy-pap | ssl.keystore.password = null 23:16:46 policy-pap | ssl.keystore.type = JKS 23:16:46 policy-pap | ssl.protocol = TLSv1.3 23:16:46 policy-pap | ssl.provider = null 23:16:46 policy-pap | ssl.secure.random.implementation = null 23:16:46 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:46 policy-pap | ssl.truststore.certificates = null 23:16:46 policy-pap | ssl.truststore.location = null 23:16:46 policy-pap | ssl.truststore.password = null 23:16:46 policy-pap | ssl.truststore.type = JKS 23:16:46 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:46 policy-pap | 23:16:46 policy-pap | [2024-03-11T23:14:47.784+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:46 policy-pap | [2024-03-11T23:14:47.785+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:46 policy-pap | [2024-03-11T23:14:47.785+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710198887784 23:16:46 policy-pap | [2024-03-11T23:14:47.785+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3, groupId=a774452f-60f0-41d2-bd1a-6ce78860e297] Subscribed to topic(s): policy-pdp-pap 23:16:46 policy-pap | [2024-03-11T23:14:47.786+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 23:16:46 policy-pap | [2024-03-11T23:14:47.786+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=d1d543f4-2928-4033-8435-8d4bd1402861, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@3f2ab6ec 23:16:46 policy-pap | 
[2024-03-11T23:14:47.786+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=d1d543f4-2928-4033-8435-8d4bd1402861, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:46 policy-pap | [2024-03-11T23:14:47.787+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:46 policy-pap | allow.auto.create.topics = true 23:16:46 policy-pap | auto.commit.interval.ms = 5000 23:16:46 policy-pap | auto.include.jmx.reporter = true 23:16:46 policy-pap | auto.offset.reset = latest 23:16:46 policy-pap | bootstrap.servers = [kafka:9092] 23:16:46 policy-pap | check.crcs = true 23:16:46 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:46 policy-pap | client.id = consumer-policy-pap-4 23:16:46 policy-pap | client.rack = 23:16:46 policy-pap | connections.max.idle.ms = 540000 23:16:46 policy-pap | default.api.timeout.ms = 60000 23:16:46 policy-pap | enable.auto.commit = true 23:16:46 policy-pap | exclude.internal.topics = true 23:16:46 policy-pap | fetch.max.bytes = 52428800 23:16:46 policy-pap | fetch.max.wait.ms = 500 23:16:46 policy-pap | fetch.min.bytes = 1 23:16:46 policy-pap | group.id = policy-pap 23:16:46 policy-pap | group.instance.id = null 23:16:46 policy-pap | heartbeat.interval.ms = 3000 23:16:46 policy-pap | interceptor.classes = [] 23:16:46 policy-pap | internal.leave.group.on.close = true 23:16:46 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:46 policy-pap | isolation.level = read_uncommitted 23:16:46 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:46 policy-pap | max.partition.fetch.bytes = 1048576 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.035483683Z level=info msg="Migration successfully executed" id="create alert_image table" duration=937.419µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.040549637Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.04269191Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=2.148923ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.04657575Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.046648781Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=73.902µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.053030841Z level=info msg="Executing migration" id=create_alert_configuration_history_table 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.054650714Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.617723ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.060246998Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.062030514Z level=info msg="Migration 
successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.783866ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.065881433Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.066447354Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.071794684Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.07310335Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=1.313047ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.07748779Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.080549232Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=3.068063ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.086472643Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.09569743Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=9.230058ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.103350777Z level=info msg="Executing migration" id="create library_element table v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.104409088Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.057971ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.108214136Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.110788749Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=2.573672ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.115882992Z level=info msg="Executing migration" id="create library_element_connection table v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.116887423Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.004062ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.121510946Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.123043598Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.531082ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.127886687Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.129648502Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.761295ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.134770957Z level=info msg="Executing migration" id="increase max description length to 2048" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.134800487Z level=info msg="Migration 
successfully executed" id="increase max description length to 2048" duration=30.7µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.140858001Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.141125266Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=264.855µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.148353454Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.149069978Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=716.604µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.153927777Z level=info msg="Executing migration" id="create data_keys table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.155752885Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.829238ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.160416949Z level=info msg="Executing migration" id="create secrets table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.161424671Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.007392ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.164922552Z level=info msg="Executing migration" id="rename data_keys name column to id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.198848043Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=33.922661ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.202501207Z level=info msg="Executing migration" id="add name column into data_keys" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.208007899Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=5.502132ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.212683385Z level=info msg="Executing migration" id="copy data_keys id column values into name" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.213037612Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=354.057µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.220453084Z level=info msg="Executing migration" id="rename data_keys name column to label" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.254122608Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=33.670734ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.288538217Z level=info msg="Executing migration" id="rename data_keys id column back to name" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.321001758Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=32.465361ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.327686023Z level=info msg="Executing migration" id="create kv_store table v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.328881198Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=1.192175ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.333026652Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 23:16:46 policy-pap | max.poll.interval.ms = 300000 23:16:46 policy-pap | max.poll.records = 500 23:16:46 
policy-pap | metadata.max.age.ms = 300000 23:16:46 policy-pap | metric.reporters = [] 23:16:46 policy-pap | metrics.num.samples = 2 23:16:46 policy-pap | metrics.recording.level = INFO 23:16:46 policy-pap | metrics.sample.window.ms = 30000 23:16:46 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:46 policy-pap | receive.buffer.bytes = 65536 23:16:46 policy-pap | reconnect.backoff.max.ms = 1000 23:16:46 policy-pap | reconnect.backoff.ms = 50 23:16:46 policy-pap | request.timeout.ms = 30000 23:16:46 policy-pap | retry.backoff.ms = 100 23:16:46 policy-pap | sasl.client.callback.handler.class = null 23:16:46 policy-pap | sasl.jaas.config = null 23:16:46 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:46 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:46 policy-pap | sasl.kerberos.service.name = null 23:16:46 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:46 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:46 policy-pap | sasl.login.callback.handler.class = null 23:16:46 policy-pap | sasl.login.class = null 23:16:46 policy-pap | sasl.login.connect.timeout.ms = null 23:16:46 policy-pap | sasl.login.read.timeout.ms = null 23:16:46 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:46 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:46 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:46 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:46 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:46 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:46 policy-pap | sasl.mechanism = GSSAPI 23:16:46 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:46 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:46 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:46 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:46 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:46 kafka | zookeeper.ssl.ocsp.enable = false 23:16:46 kafka | zookeeper.ssl.protocol = TLSv1.2 23:16:46 kafka | zookeeper.ssl.truststore.location = null 23:16:46 kafka | zookeeper.ssl.truststore.password = null 23:16:46 kafka | zookeeper.ssl.truststore.type = null 23:16:46 kafka | (kafka.server.KafkaConfig) 23:16:46 kafka | [2024-03-11 23:14:13,779] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:46 kafka | [2024-03-11 23:14:13,780] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:46 kafka | [2024-03-11 23:14:13,781] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:46 kafka | [2024-03-11 23:14:13,784] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:46 kafka | [2024-03-11 23:14:13,820] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:13,825] INFO No logs found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:13,835] INFO Loaded 0 logs in 15ms (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:13,837] INFO Starting log cleanup with a period of 300000 ms. 
(kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:13,839] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:13,850] INFO Starting the log cleaner (kafka.log.LogCleaner) 23:16:46 kafka | [2024-03-11 23:14:13,897] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) 23:16:46 kafka | [2024-03-11 23:14:13,934] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 23:16:46 kafka | [2024-03-11 23:14:13,948] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 23:16:46 kafka | [2024-03-11 23:14:13,975] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 23:16:46 kafka | [2024-03-11 23:14:14,329] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 23:16:46 kafka | [2024-03-11 23:14:14,355] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) 23:16:46 kafka | [2024-03-11 23:14:14,355] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 23:16:46 kafka | [2024-03-11 23:14:14,364] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) 23:16:46 kafka | [2024-03-11 23:14:14,369] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 23:16:46 kafka | [2024-03-11 23:14:14,396] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:46 kafka | [2024-03-11 23:14:14,398] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:46 kafka | [2024-03-11 23:14:14,399] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:46 kafka | [2024-03-11 23:14:14,401] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:46 kafka | [2024-03-11 23:14:14,405] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:46 kafka | [2024-03-11 23:14:14,420] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) 23:16:46 kafka | [2024-03-11 23:14:14,422] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) 23:16:46 kafka | [2024-03-11 23:14:14,451] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) 23:16:46 kafka | [2024-03-11 23:14:14,481] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1710198854466,1710198854466,1,0,0,72057609563865089,258,0,27 23:16:46 kafka | (kafka.zk.KafkaZkClient) 23:16:46 kafka | [2024-03-11 23:14:14,482] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 23:16:46 kafka | [2024-03-11 23:14:14,536] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 23:16:46 kafka | [2024-03-11 23:14:14,547] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:46 kafka | [2024-03-11 23:14:14,555] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:46 kafka | [2024-03-11 23:14:14,557] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:46 kafka | [2024-03-11 23:14:14,567] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) 23:16:46 kafka | [2024-03-11 23:14:14,573] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:14,583] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) 23:16:46 kafka | [2024-03-11 23:14:14,585] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:14,588] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) 23:16:46 kafka | [2024-03-11 23:14:14,594] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) 23:16:46 kafka | [2024-03-11 23:14:14,612] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) 23:16:46 kafka | [2024-03-11 23:14:14,616] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) 23:16:46 kafka | [2024-03-11 23:14:14,616] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) 23:16:46 kafka | [2024-03-11 23:14:14,635] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). 
(kafka.server.metadata.ZkMetadataCache) 23:16:46 kafka | [2024-03-11 23:14:14,635] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) 23:16:46 kafka | [2024-03-11 23:14:14,643] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) 23:16:46 kafka | [2024-03-11 23:14:14,650] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) 23:16:46 kafka | [2024-03-11 23:14:14,653] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) 23:16:46 kafka | [2024-03-11 23:14:14,676] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:46 kafka | [2024-03-11 23:14:14,677] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) 23:16:46 kafka | [2024-03-11 23:14:14,686] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) 23:16:46 kafka | [2024-03-11 23:14:14,695] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) 23:16:46 kafka | [2024-03-11 23:14:14,709] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) 23:16:46 kafka | [2024-03-11 23:14:14,709] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.335077374Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=2.050762ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.342385302Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.342730289Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=341.576µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.346599388Z level=info msg="Executing migration" id="create permission table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.348164729Z level=info msg="Migration successfully executed" id="create permission table" duration=1.563951ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.354660412Z level=info msg="Executing migration" id="add unique index permission.role_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.355962458Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.304316ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.361237185Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.362356547Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.118672ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.367211306Z level=info msg="Executing migration" id="create role table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.368203487Z level=info msg="Migration successfully executed" id="create role table" duration=989.371µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.371975004Z level=info msg="Executing migration" id="add column display_name" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.381833433Z level=info msg="Migration successfully executed" id="add column 
display_name" duration=9.857149ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.385389896Z level=info msg="Executing migration" id="add column group_name" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.392447429Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.055003ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.397705096Z level=info msg="Executing migration" id="add index role.org_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.399485273Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=1.780647ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.405323771Z level=info msg="Executing migration" id="add unique index role_org_id_name" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.406532026Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.207104ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.410116819Z level=info msg="Executing migration" id="add index role_org_id_uid" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.411344484Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.228555ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.41512199Z level=info msg="Executing migration" id="create team role table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.41609484Z level=info msg="Migration successfully executed" id="create team role table" duration=969.85µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.4205164Z level=info msg="Executing migration" id="add index team_role.org_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.421681594Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.163494ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.425261086Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.426477271Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.214665ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.431122936Z level=info msg="Executing migration" id="add index team_role.team_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.432943483Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.819008ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.437121377Z level=info msg="Executing migration" id="create user role table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.438523086Z level=info msg="Migration successfully executed" id="create user role table" duration=1.401589ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.442454476Z level=info msg="Executing migration" id="add index user_role.org_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.443547088Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.093492ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.449385477Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.451136292Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.749785ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.457721877Z level=info 
msg="Executing migration" id="add index user_role.user_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.459832859Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=2.115053ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.464030015Z level=info msg="Executing migration" id="create builtin role table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.465642647Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.608153ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.469493226Z level=info msg="Executing migration" id="add index builtin_role.role_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.470660269Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.166333ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.475352065Z level=info msg="Executing migration" id="add index builtin_role.name" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.476864565Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.5086ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.481152092Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.490850609Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=9.698527ms 23:16:46 kafka | [2024-03-11 23:14:14,709] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) 23:16:46 kafka | [2024-03-11 23:14:14,710] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) 23:16:46 kafka | [2024-03-11 23:14:14,710] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) 23:16:46 kafka | [2024-03-11 23:14:14,711] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) 23:16:46 kafka | [2024-03-11 23:14:14,714] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) 23:16:46 kafka | [2024-03-11 23:14:14,714] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) 23:16:46 kafka | [2024-03-11 23:14:14,715] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) 23:16:46 kafka | [2024-03-11 23:14:14,715] INFO [Topic Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) 23:16:46 kafka | [2024-03-11 23:14:14,716] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) 23:16:46 kafka | [2024-03-11 23:14:14,719] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:14,728] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) 23:16:46 kafka | [2024-03-11 23:14:14,729] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) 23:16:46 kafka | [2024-03-11 23:14:14,736] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) 23:16:46 kafka | [2024-03-11 23:14:14,742] DEBUG [ReplicaStateMachine 
controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) 23:16:46 kafka | [2024-03-11 23:14:14,742] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) 23:16:46 kafka | [2024-03-11 23:14:14,742] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) 23:16:46 kafka | [2024-03-11 23:14:14,743] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) 23:16:46 kafka | [2024-03-11 23:14:14,746] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) 23:16:46 kafka | [2024-03-11 23:14:14,747] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) 23:16:46 kafka | [2024-03-11 23:14:14,747] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) 23:16:46 kafka | [2024-03-11 23:14:14,754] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) 23:16:46 kafka | [2024-03-11 23:14:14,755] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) 23:16:46 kafka | [2024-03-11 23:14:14,755] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) 23:16:46 kafka | [2024-03-11 23:14:14,755] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) 23:16:46 kafka | [2024-03-11 23:14:14,756] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) 23:16:46 kafka | [2024-03-11 23:14:14,760] INFO [Controller id=1, targetBrokerId=1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient) 23:16:46 kafka | [2024-03-11 23:14:14,761] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) 23:16:46 kafka | [2024-03-11 23:14:14,769] WARN [Controller id=1, targetBrokerId=1] Connection to node 1 (kafka/172.17.0.6:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient) 23:16:46 kafka | [2024-03-11 23:14:14,771] WARN [RequestSendThread controllerId=1] Controller 1's connection to broker kafka:9092 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread) 23:16:46 kafka | java.io.IOException: Connection to kafka:9092 (id: 1 rack: null) failed. 
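The output above interleaves four services (grafana, kafka, policy-pap, policy-db-migrator) on a shared "HH:MM:SS service | message" prefix, which makes it hard to follow any one component or to spot slow Grafana schema migrations. A minimal sketch of how such a console capture could be demultiplexed and the migrator durations extracted — the file name, regexes, and millisecond conversion are illustrative assumptions, not part of the CSIT job, and they presume a saved copy of this log with one console entry per line:

# Sketch only: split an interleaved docker-compose console log by service
# prefix and pull "Migration successfully executed ... duration=" timings
# from the grafana migrator lines, as seen above.
import re
from collections import defaultdict

LINE_RE = re.compile(r"^\d{2}:\d{2}:\d{2} (?P<service>[\w-]+) \| (?P<msg>.*)$")
DURATION_RE = re.compile(r'id="?(?P<id>[^"]+?)"? duration=(?P<value>[\d.]+)(?P<unit>µs|ms|s)')

def split_by_service(lines):
    """Group console lines of the form 'HH:MM:SS service | message' by service name."""
    per_service = defaultdict(list)
    for line in lines:
        m = LINE_RE.match(line.rstrip("\n"))
        if m:
            per_service[m.group("service")].append(m.group("msg"))
    return per_service

def migration_durations(grafana_lines):
    """Yield (migration_id, duration_in_ms) from Grafana migrator success lines."""
    scale = {"µs": 0.001, "ms": 1.0, "s": 1000.0}
    for msg in grafana_lines:
        if "Migration successfully executed" not in msg:
            continue
        m = DURATION_RE.search(msg)
        if m:
            yield m.group("id"), float(m.group("value")) * scale[m.group("unit")]

if __name__ == "__main__":
    # "console.log" is a hypothetical saved copy of this job's console output.
    with open("console.log", encoding="utf-8") as fh:
        services = split_by_service(fh)
    slowest = sorted(migration_durations(services.get("grafana", [])),
                     key=lambda kv: kv[1], reverse=True)
    for mig_id, ms in slowest[:10]:
        print(f"{ms:8.3f} ms  {mig_id}")

Run against a saved console log, this would list the ten slowest migrations (for example the ~33 ms "rename data_keys ..." steps visible further below), which is usually enough to tell a slow database apart from a genuinely stuck migrator.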
23:16:46 kafka | at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:70) 23:16:46 kafka | at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:298) 23:16:46 kafka | at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:251) 23:16:46 kafka | at org.apache.kafka.server.util.ShutdownableThread.run(ShutdownableThread.java:130) 23:16:46 kafka | [2024-03-11 23:14:14,773] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) 23:16:46 kafka | [2024-03-11 23:14:14,776] INFO [Controller id=1, targetBrokerId=1] Client requested connection close from node 1 (org.apache.kafka.clients.NetworkClient) 23:16:46 kafka | [2024-03-11 23:14:14,779] INFO Kafka version: 7.6.0-ccs (org.apache.kafka.common.utils.AppInfoParser) 23:16:46 kafka | [2024-03-11 23:14:14,779] INFO Kafka commitId: 1991cb733c81d6791626f88253a042b2ec835ab8 (org.apache.kafka.common.utils.AppInfoParser) 23:16:46 kafka | [2024-03-11 23:14:14,779] INFO Kafka startTimeMs: 1710198854768 (org.apache.kafka.common.utils.AppInfoParser) 23:16:46 kafka | [2024-03-11 23:14:14,782] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) 23:16:46 kafka | [2024-03-11 23:14:14,886] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 23:16:46 kafka | [2024-03-11 23:14:14,992] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 23:16:46 kafka | [2024-03-11 23:14:14,999] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:15,077] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 23:16:46 kafka | [2024-03-11 23:14:19,778] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 23:16:46 kafka | [2024-03-11 23:14:19,779] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 23:16:46 kafka | [2024-03-11 23:14:48,324] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 23:16:46 kafka | [2024-03-11 23:14:48,327] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block (kafka.controller.KafkaController) 23:16:46 kafka | [2024-03-11 23:14:48,331] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> 
ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 23:16:46 kafka | [2024-03-11 23:14:48,337] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) 23:16:46 kafka | [2024-03-11 23:14:48,373] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(MS889OzURY6CvMQUxdh95w),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(pKZWw3Z-TCqlQNYpHAZfkA),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 23:16:46 kafka | [2024-03-11 23:14:48,375] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 
(kafka.controller.KafkaController) 23:16:46 kafka | [2024-03-11 23:14:48,378] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,378] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,379] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,379] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,379] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,379] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,379] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,379] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,379] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,379] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,379] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,379] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,379] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,380] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,380] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,380] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,380] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned 
replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,380] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,380] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,380] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,380] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,381] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,381] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,381] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,381] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,381] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,381] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,381] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,381] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,381] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,381] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,381] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA 
VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0390-jpatoscarequirement_metadata.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 kafka | [2024-03-11 23:14:48,381] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,381] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from 
NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:46 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:46 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:46 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:46 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:46 policy-pap | security.protocol = PLAINTEXT 23:16:46 policy-pap | security.providers = null 23:16:46 policy-pap | send.buffer.bytes = 131072 23:16:46 policy-pap | session.timeout.ms = 45000 23:16:46 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:46 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:46 policy-pap | ssl.cipher.suites = null 23:16:46 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:46 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:46 policy-pap | ssl.engine.factory.class = null 23:16:46 policy-pap | ssl.key.password = null 23:16:46 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:46 policy-pap | ssl.keystore.certificate.chain = null 23:16:46 policy-pap | ssl.keystore.key = null 23:16:46 policy-pap | ssl.keystore.location = null 23:16:46 policy-pap | ssl.keystore.password = null 23:16:46 policy-pap | ssl.keystore.type = JKS 23:16:46 policy-pap | ssl.protocol = TLSv1.3 23:16:46 policy-pap | ssl.provider = null 23:16:46 policy-pap | ssl.secure.random.implementation = null 23:16:46 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:46 policy-pap | ssl.truststore.certificates = null 23:16:46 policy-pap | ssl.truststore.location = null 23:16:46 policy-pap | 
ssl.truststore.password = null 23:16:46 policy-pap | ssl.truststore.type = JKS 23:16:46 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:46 policy-pap | 23:16:46 policy-pap | [2024-03-11T23:14:47.792+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:46 policy-pap | [2024-03-11T23:14:47.792+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:46 policy-pap | [2024-03-11T23:14:47.792+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710198887792 23:16:46 policy-pap | [2024-03-11T23:14:47.793+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 23:16:46 policy-pap | [2024-03-11T23:14:47.793+00:00|INFO|ServiceManager|main] Policy PAP starting topics 23:16:46 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0450-pdpgroup.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0470-pdp.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, 
POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.494453863Z level=info msg="Executing migration" id="add index builtin_role.org_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.495584236Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.129063ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.501130568Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.502383023Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.251845ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.505942916Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.507039258Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.095922ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.510539219Z level=info msg="Executing migration" id="add unique index role.uid" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.511639632Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.100503ms 23:16:46 policy-pap | [2024-03-11T23:14:47.793+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=d1d543f4-2928-4033-8435-8d4bd1402861, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:46 policy-pap | [2024-03-11T23:14:47.793+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=a774452f-60f0-41d2-bd1a-6ce78860e297, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:46 policy-pap | [2024-03-11T23:14:47.794+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=60824dfc-2a3f-40df-95ff-b519b19c644c, alive=false, publisher=null]]: starting 23:16:46 policy-pap | 
[2024-03-11T23:14:47.809+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:16:46 policy-pap | acks = -1 23:16:46 policy-pap | auto.include.jmx.reporter = true 23:16:46 policy-pap | batch.size = 16384 23:16:46 policy-pap | bootstrap.servers = [kafka:9092] 23:16:46 policy-pap | buffer.memory = 33554432 23:16:46 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:46 policy-pap | client.id = producer-1 23:16:46 policy-pap | compression.type = none 23:16:46 policy-pap | connections.max.idle.ms = 540000 23:16:46 policy-pap | delivery.timeout.ms = 120000 23:16:46 policy-pap | enable.idempotence = true 23:16:46 policy-pap | interceptor.classes = [] 23:16:46 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:46 policy-pap | linger.ms = 0 23:16:46 policy-pap | max.block.ms = 60000 23:16:46 policy-pap | max.in.flight.requests.per.connection = 5 23:16:46 policy-pap | max.request.size = 1048576 23:16:46 policy-pap | metadata.max.age.ms = 300000 23:16:46 policy-pap | metadata.max.idle.ms = 300000 23:16:46 policy-pap | metric.reporters = [] 23:16:46 policy-pap | metrics.num.samples = 2 23:16:46 policy-pap | metrics.recording.level = INFO 23:16:46 policy-pap | metrics.sample.window.ms = 30000 23:16:46 policy-pap | partitioner.adaptive.partitioning.enable = true 23:16:46 policy-pap | partitioner.availability.timeout.ms = 0 23:16:46 policy-pap | partitioner.class = null 23:16:46 policy-pap | partitioner.ignore.keys = false 23:16:46 policy-pap | receive.buffer.bytes = 32768 23:16:46 policy-pap | reconnect.backoff.max.ms = 1000 23:16:46 policy-pap | reconnect.backoff.ms = 50 23:16:46 policy-pap | request.timeout.ms = 30000 23:16:46 policy-pap | retries = 2147483647 23:16:46 policy-pap | retry.backoff.ms = 100 23:16:46 policy-pap | sasl.client.callback.handler.class = null 23:16:46 policy-pap | sasl.jaas.config = null 23:16:46 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:46 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:46 policy-pap | sasl.kerberos.service.name = null 23:16:46 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:46 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:46 policy-pap | sasl.login.callback.handler.class = null 23:16:46 policy-pap | sasl.login.class = null 23:16:46 policy-pap | sasl.login.connect.timeout.ms = null 23:16:46 policy-pap | sasl.login.read.timeout.ms = null 23:16:46 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:46 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:46 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:46 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:46 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:46 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:46 policy-pap | sasl.mechanism = GSSAPI 23:16:46 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:46 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:46 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:46 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:46 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:46 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:46 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:46 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:46 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:46 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:46 policy-pap | 
security.protocol = PLAINTEXT 23:16:46 policy-pap | security.providers = null 23:16:46 policy-pap | send.buffer.bytes = 131072 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.517381469Z level=info msg="Executing migration" id="create seed assignment table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.518214976Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=832.646µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.523442292Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.52530643Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.864188ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.529227519Z level=info msg="Executing migration" id="add column hidden to role table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.538477377Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=9.249028ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.54302895Z level=info msg="Executing migration" id="permission kind migration" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.548650294Z level=info msg="Migration successfully executed" id="permission kind migration" duration=5.620584ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.55535951Z level=info msg="Executing migration" id="permission attribute migration" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.563585838Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=8.225587ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.567495387Z level=info msg="Executing migration" id="permission identifier migration" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.575531121Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=8.032904ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.580043642Z level=info msg="Executing migration" id="add permission identifier index" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.58091652Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=872.738µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.58436665Z level=info msg="Executing migration" id="add permission action scope role_id index" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.585249718Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=883.498µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.590087297Z level=info msg="Executing migration" id="remove permission role_id action scope index" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.591969564Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.878677ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.598977167Z level=info msg="Executing migration" id="create query_history table v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.599984097Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.00608ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.603645922Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.605865857Z 
level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=2.219675ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.610239806Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.610479261Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=238.495µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.615492453Z level=info msg="Executing migration" id="rbac disabled migrator" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.615589435Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=95.152µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.619239799Z level=info msg="Executing migration" id="teams permissions migration" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.619930653Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=690.684µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.62372906Z level=info msg="Executing migration" id="dashboard permissions" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.624913024Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=1.185784ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.630559039Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.631392526Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=830.316µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.64686805Z level=info msg="Executing migration" id="drop managed folder create actions" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.647382571Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=510.23µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.651938944Z level=info msg="Executing migration" id="alerting notification permissions" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.652868882Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=929.758µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.657063638Z level=info msg="Executing migration" id="create query_history_star table v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.658043277Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=979.129µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.663434197Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.664806864Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.371367ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.668655644Z level=info msg="Executing migration" id="add column org_id in query_history_star" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.677375281Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.718908ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.681081035Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 23:16:46 grafana | 
logger=migrator t=2024-03-11T23:14:17.681259249Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=177.804µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.685552497Z level=info msg="Executing migration" id="create correlation table v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.68765182Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=2.099863ms 23:16:46 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:46 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:46 policy-pap | ssl.cipher.suites = null 23:16:46 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:46 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:46 policy-pap | ssl.engine.factory.class = null 23:16:46 policy-pap | ssl.key.password = null 23:16:46 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:46 policy-pap | ssl.keystore.certificate.chain = null 23:16:46 policy-pap | ssl.keystore.key = null 23:16:46 policy-pap | ssl.keystore.location = null 23:16:46 policy-pap | ssl.keystore.password = null 23:16:46 policy-pap | ssl.keystore.type = JKS 23:16:46 policy-pap | ssl.protocol = TLSv1.3 23:16:46 policy-pap | ssl.provider = null 23:16:46 policy-pap | ssl.secure.random.implementation = null 23:16:46 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:46 policy-pap | ssl.truststore.certificates = null 23:16:46 policy-pap | ssl.truststore.location = null 23:16:46 policy-pap | ssl.truststore.password = null 23:16:46 policy-pap | ssl.truststore.type = JKS 23:16:46 policy-pap | transaction.timeout.ms = 60000 23:16:46 policy-pap | transactional.id = null 23:16:46 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:46 policy-pap | 23:16:46 policy-pap | [2024-03-11T23:14:47.821+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
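For reference, the producer that the PAP sink instantiates above can be reproduced from the ProducerConfig dump (bootstrap.servers = [kafka:9092], acks = -1, enable.idempotence = true, String key/value serializers). The sketch below is illustrative only, not the ONAP PAP source; the class name, record key, and payload are assumptions made for the example.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PdpPapProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Values mirror the ProducerConfig dump logged by policy-pap above.
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.ACKS_CONFIG, "all");               // acks = -1
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);  // "Instantiated an idempotent producer"
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // policy-pdp-pap is the topic the PAP sink publishes to in this CSIT run;
                // key and payload here are placeholders, not real PDP-PAP messages.
                producer.send(new ProducerRecord<>("policy-pdp-pap", "pdp-update", "{}"));
                producer.flush();
            }
        }
    }

Run against the compose network, such a producer would publish to policy-pdp-pap, the same topic the Kafka controller brings online in the log entries that follow.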
23:16:46 policy-pap | [2024-03-11T23:14:47.838+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:46 policy-pap | [2024-03-11T23:14:47.838+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:46 policy-pap | [2024-03-11T23:14:47.839+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710198887838 23:16:46 policy-pap | [2024-03-11T23:14:47.839+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=60824dfc-2a3f-40df-95ff-b519b19c644c, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:46 policy-pap | [2024-03-11T23:14:47.839+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=12e6bf28-3514-44d9-95f7-7e17cbeddb4e, alive=false, publisher=null]]: starting 23:16:46 policy-pap | [2024-03-11T23:14:47.840+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:16:46 policy-pap | acks = -1 23:16:46 policy-pap | auto.include.jmx.reporter = true 23:16:46 policy-pap | batch.size = 16384 23:16:46 policy-pap | bootstrap.servers = [kafka:9092] 23:16:46 policy-pap | buffer.memory = 33554432 23:16:46 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:46 policy-pap | client.id = producer-2 23:16:46 policy-pap | compression.type = none 23:16:46 policy-pap | connections.max.idle.ms = 540000 23:16:46 policy-pap | delivery.timeout.ms = 120000 23:16:46 policy-pap | enable.idempotence = true 23:16:46 policy-pap | interceptor.classes = [] 23:16:46 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:46 policy-pap | linger.ms = 0 23:16:46 policy-pap | max.block.ms = 60000 23:16:46 policy-pap | max.in.flight.requests.per.connection = 5 23:16:46 policy-pap | max.request.size = 1048576 23:16:46 policy-pap | metadata.max.age.ms = 300000 23:16:46 policy-pap | metadata.max.idle.ms = 300000 23:16:46 policy-pap | metric.reporters = [] 23:16:46 policy-pap | metrics.num.samples = 2 23:16:46 policy-pap | metrics.recording.level = INFO 23:16:46 policy-pap | metrics.sample.window.ms = 30000 23:16:46 policy-pap | partitioner.adaptive.partitioning.enable = true 23:16:46 policy-pap | partitioner.availability.timeout.ms = 0 23:16:46 policy-pap | partitioner.class = null 23:16:46 policy-pap | partitioner.ignore.keys = false 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.695674862Z level=info msg="Executing migration" id="add index correlations.uid" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.697803525Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=2.126253ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.70195411Z level=info msg="Executing migration" id="add index correlations.source_uid" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.70394662Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.99266ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.70836724Z level=info msg="Executing migration" id="add correlation config column" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.716774591Z level=info msg="Migration successfully executed" id="add correlation config column" duration=8.405941ms 23:16:46 kafka | [2024-03-11 23:14:48,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | 
[2024-03-11 23:14:48,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,382] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,382] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,388] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica 
(state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to 
NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.722466467Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.723665091Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.198314ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.72754823Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.728869077Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.318687ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.734163614Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.757920778Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=23.758294ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.761387958Z level=info msg="Executing migration" id="create correlation v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.762366258Z level=info msg="Migration successfully executed" id="create correlation v2" duration=977.54µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.766310388Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.767563493Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.252485ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.773686768Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.775889342Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=2.202824ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.780220901Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.781902255Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.682004ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.785778324Z level=info msg="Executing migration" id="copy correlation v1 to v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.786190142Z 
level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=412.958µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.791450339Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.792388228Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=937.299µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.798334459Z level=info msg="Executing migration" id="add provisioning column" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.809599808Z level=info msg="Migration successfully executed" id="add provisioning column" duration=11.265659ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.813210191Z level=info msg="Executing migration" id="create entity_events table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.813935126Z level=info msg="Migration successfully executed" id="create entity_events table" duration=724.005µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.818636551Z level=info msg="Executing migration" id="create dashboard public config v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.819784095Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.147224ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.824758176Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.825362638Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.830778489Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.831393191Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.836348752Z level=info msg="Executing migration" id="Drop old dashboard public config table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.837899263Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.549371ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.841914665Z level=info msg="Executing migration" id="recreate dashboard public config v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.84366102Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.745255ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.850204303Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.851468749Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.264776ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.85549182Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.856848408Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.355668ms 23:16:46 grafana | logger=migrator 
t=2024-03-11T23:14:17.863082015Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.864253949Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.171205ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.871460635Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.873271672Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.812107ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.877540028Z level=info msg="Executing migration" id="Drop public config table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.878783064Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.243746ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.882816686Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.884067212Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.249766ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.889695916Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.890972321Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.275745ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.894995724Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.896277439Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.281505ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.901490906Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.90274496Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.253015ms 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,389] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,390] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 
from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,390] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,390] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,390] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,390] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,390] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,390] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,390] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,390] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,390] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,390] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,390] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,558] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,558] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,558] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,558] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 
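For reference, the controller transitions above (NonExistentPartition to NewPartition to OnlinePartition, single replica on broker 1) can be checked from a client with the Kafka Admin API. The sketch below is illustrative only and is not part of the CSIT suite; it assumes the broker is reachable at kafka:9092 as in this compose setup.

    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.TopicDescription;

    public class DescribeCsitTopics {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            try (Admin admin = Admin.create(props)) {
                // Describe the topic whose partition the controller just moved to OnlinePartition.
                TopicDescription desc = admin.describeTopics(List.of("policy-pdp-pap"))
                        .allTopicNames().get()
                        .get("policy-pdp-pap");
                desc.partitions().forEach(p ->
                        System.out.printf("partition %d leader %s replicas %s%n",
                                p.partition(), p.leader(), p.replicas()));
            }
        }
    }

With the single-broker setup used here, the output should show one replica on broker 1 for policy-pdp-pap-0, matching the LeaderAndIsr state logged by the controller.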
23:16:46 kafka | [2024-03-11 23:14:48,558] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,558] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,559] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,559] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,559] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,559] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,559] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,559] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,559] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,559] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) 
NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0550-toscacapabilitytypes.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS 
toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-pap | receive.buffer.bytes = 32768 23:16:46 policy-pap | reconnect.backoff.max.ms = 1000 23:16:46 policy-pap | reconnect.backoff.ms = 50 23:16:46 policy-pap | request.timeout.ms = 30000 23:16:46 policy-pap | retries = 2147483647 23:16:46 policy-pap | retry.backoff.ms = 100 23:16:46 policy-pap | sasl.client.callback.handler.class = null 23:16:46 policy-pap | sasl.jaas.config = null 23:16:46 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:46 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:46 policy-pap | sasl.kerberos.service.name = null 23:16:46 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:46 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:46 policy-pap | sasl.login.callback.handler.class = null 23:16:46 policy-pap | sasl.login.class = null 23:16:46 policy-pap | sasl.login.connect.timeout.ms = null 23:16:46 policy-pap | sasl.login.read.timeout.ms = null 23:16:46 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:46 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:46 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:46 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:46 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:46 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:46 policy-pap | sasl.mechanism = GSSAPI 23:16:46 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:46 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:46 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:46 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:46 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:46 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:46 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:46 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:46 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:46 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:46 policy-pap | security.protocol = PLAINTEXT 23:16:46 policy-pap | security.providers = null 23:16:46 policy-pap | send.buffer.bytes = 131072 23:16:46 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:46 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:46 policy-pap | ssl.cipher.suites = null 23:16:46 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:46 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:46 policy-pap | ssl.engine.factory.class = null 23:16:46 policy-pap | ssl.key.password = null 23:16:46 policy-pap | ssl.keymanager.algorithm = 
SunX509 23:16:46 policy-pap | ssl.keystore.certificate.chain = null 23:16:46 policy-pap | ssl.keystore.key = null 23:16:46 policy-pap | ssl.keystore.location = null 23:16:46 policy-pap | ssl.keystore.password = null 23:16:46 policy-pap | ssl.keystore.type = JKS 23:16:46 policy-pap | ssl.protocol = TLSv1.3 23:16:46 policy-pap | ssl.provider = null 23:16:46 policy-pap | ssl.secure.random.implementation = null 23:16:46 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:46 policy-pap | ssl.truststore.certificates = null 23:16:46 policy-pap | ssl.truststore.location = null 23:16:46 policy-pap | ssl.truststore.password = null 23:16:46 policy-pap | ssl.truststore.type = JKS 23:16:46 policy-pap | transaction.timeout.ms = 60000 23:16:46 policy-pap | transactional.id = null 23:16:46 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:46 policy-pap | 23:16:46 policy-pap | [2024-03-11T23:14:47.841+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 23:16:46 policy-pap | [2024-03-11T23:14:47.843+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:46 policy-pap | [2024-03-11T23:14:47.843+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:46 policy-pap | [2024-03-11T23:14:47.843+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1710198887843 23:16:46 policy-pap | [2024-03-11T23:14:47.844+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=12e6bf28-3514-44d9-95f7-7e17cbeddb4e, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:46 policy-pap | [2024-03-11T23:14:47.844+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 23:16:46 policy-pap | [2024-03-11T23:14:47.844+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 23:16:46 policy-pap | [2024-03-11T23:14:47.847+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 23:16:46 policy-pap | [2024-03-11T23:14:47.848+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 23:16:46 policy-pap | [2024-03-11T23:14:47.849+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 23:16:46 policy-pap | [2024-03-11T23:14:47.850+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0570-toscadatatype.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype 
(conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0630-toscanodetype.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, 
concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0660-toscaparameter.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-pap | [2024-03-11T23:14:47.850+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 23:16:46 policy-pap | [2024-03-11T23:14:47.851+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 23:16:46 policy-pap | [2024-03-11T23:14:47.850+00:00|INFO|TimerManager|Thread-9] timer manager update started 23:16:46 policy-pap | [2024-03-11T23:14:47.852+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 23:16:46 policy-pap | [2024-03-11T23:14:47.852+00:00|INFO|ServiceManager|main] Policy PAP started 23:16:46 policy-pap | [2024-03-11T23:14:47.854+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 11.182 seconds (process running for 11.851) 23:16:46 policy-pap | [2024-03-11T23:14:48.298+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3, groupId=a774452f-60f0-41d2-bd1a-6ce78860e297] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 23:16:46 policy-pap | [2024-03-11T23:14:48.299+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3, groupId=a774452f-60f0-41d2-bd1a-6ce78860e297] Cluster ID: OdmwtGb8RBC2kzsuX5kwmQ 23:16:46 policy-pap | [2024-03-11T23:14:48.299+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: OdmwtGb8RBC2kzsuX5kwmQ 23:16:46 policy-pap | [2024-03-11T23:14:48.299+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: OdmwtGb8RBC2kzsuX5kwmQ 23:16:46 policy-pap | [2024-03-11T23:14:48.365+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:46 policy-pap | [2024-03-11T23:14:48.365+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: OdmwtGb8RBC2kzsuX5kwmQ 23:16:46 policy-pap | [2024-03-11T23:14:48.419+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 1 with epoch 0 23:16:46 policy-pap | [2024-03-11T23:14:48.420+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 0 with epoch 0 23:16:46 policy-pap | [2024-03-11T23:14:48.446+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3, groupId=a774452f-60f0-41d2-bd1a-6ce78860e297] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:46 policy-pap | [2024-03-11T23:14:48.495+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : 
{policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:46 policy-pap | [2024-03-11T23:14:48.555+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3, groupId=a774452f-60f0-41d2-bd1a-6ce78860e297] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:46 policy-pap | [2024-03-11T23:14:48.613+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0670-toscapolicies.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0690-toscapolicy.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) 23:16:46 policy-db-migrator | 
-------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0730-toscaproperty.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty (DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0770-toscarequirement.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name 
VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0780-toscarequirements.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) 23:16:46 kafka | [2024-03-11 23:14:48,559] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,559] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,559] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,559] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,559] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,560] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,560] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,560] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,560] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, 
isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,560] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,560] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,560] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,561] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,561] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,561] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,561] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,562] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,562] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,562] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,562] INFO 
[Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,562] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,562] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,562] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,562] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,562] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName 
VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 kafka | [2024-03-11 23:14:48,562] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,562] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,563] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,563] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,563] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,563] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,563] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,563] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,563] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,563] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,563] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,563] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,567] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,568] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,568] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,568] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,568] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,568] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,569] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition 
__consumer_offsets-16 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,570] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,571] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,571] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,571] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,571] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,571] TRACE [Controller id=1 epoch=1] Sending become-leader 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,571] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,572] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,572] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,572] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,572] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,572] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,573] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) 23:16:46 policy-db-migrator | > upgrade 0820-toscatrigger.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, 
toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, 
relationshipTypesVersion) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 23:16:46 kafka | [2024-03-11 23:14:48,573] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,573] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,573] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,573] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,573] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,574] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr 
request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,574] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,574] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,574] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-36 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,574] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,574] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,575] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,575] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,575] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,575] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,575] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,576] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,576] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,576] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,576] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,576] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,576] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 
(state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,578] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,582] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,584] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,584] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,584] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,584] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,585] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,586] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,586] TRACE [Controller id=1 epoch=1] Changed state of 
replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,586] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,586] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,586] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,586] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,586] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,586] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,586] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,586] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,586] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,586] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,587] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,587] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,587] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,587] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,587] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,587] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,587] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,587] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica 
(state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,587] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.90710398Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.932124238Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=25.019908ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.93669067Z level=info msg="Executing migration" id="add annotations_enabled column" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.945405288Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=8.713768ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.950061112Z level=info msg="Executing migration" id="add time_selection_enabled column" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.956301349Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=6.239037ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.961089306Z level=info msg="Executing migration" id="delete orphaned public dashboards" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.961423163Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=332.447µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.965516827Z level=info msg="Executing migration" id="add share column" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.977692124Z level=info msg="Migration successfully executed" id="add share column" duration=12.173637ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.982243777Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.982519652Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=275.985µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.987267259Z level=info msg="Executing migration" id="create file table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:17.989154647Z level=info msg="Migration successfully executed" id="create file table" duration=1.886278ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.018542125Z level=info msg="Executing migration" id="file table idx: path natural pk" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.020809911Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=2.271776ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.025307402Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.026493927Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.186645ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.031491588Z level=info msg="Executing migration" id="create file_meta table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.032500249Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.007622ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.036410108Z level=info 
msg="Executing migration" id="file table idx: path key" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.037656094Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.245676ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.046105606Z level=info msg="Executing migration" id="set path collation in file table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.04635502Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=249.535µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.051559605Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.051794491Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=234.896µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.057447256Z level=info msg="Executing migration" id="managed permissions migration" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.058447566Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=999.49µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.063650952Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.064006979Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=356.057µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.068566302Z level=info msg="Executing migration" id="RBAC action name migrator" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.070782926Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=2.215874ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.074777708Z level=info msg="Executing migration" id="Add UID column to playlist" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.084301142Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=9.523524ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.088696252Z level=info msg="Executing migration" id="Update uid column values in playlist" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.088907186Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=210.294µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.092264174Z level=info msg="Executing migration" id="Add index for uid in playlist" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.093256904Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=993.78µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.096700045Z level=info msg="Executing migration" id="update group index for alert rules" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.097245015Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=545.681µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.10192677Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.102308708Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=381.148µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.106398891Z 
level=info msg="Executing migration" id="admin only folder/dashboard permission" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.107380751Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=980.92µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.112959954Z level=info msg="Executing migration" id="add action column to seed_assignment" 23:16:46 kafka | [2024-03-11 23:14:48,587] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,587] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,587] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,587] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,587] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,587] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,587] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,588] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,588] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,588] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,588] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,588] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,588] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,588] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,588] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,588] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,594] INFO [Broker id=1] Handling LeaderAndIsr request 
correlationId 1 from controller 1 for 51 partitions (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,595] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,595] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,595] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,595] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,595] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,595] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,595] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,595] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,595] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, 
leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,596] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,596] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,596] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,596] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,596] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,596] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,596] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,596] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,596] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], 
isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], 
removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,597] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], 
addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,598] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,599] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,599] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,599] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,599] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,599] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,599] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,599] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,599] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, 
replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,599] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,599] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,599] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,603] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,603] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,603] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,645] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,645] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,645] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,645] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,645] TRACE [Broker id=1] Handling LeaderAndIsr request 
correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,645] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE 
RESTRICT ON DELETE RESTRICT 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:46 policy-pap | [2024-03-11T23:14:48.673+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3, groupId=a774452f-60f0-41d2-bd1a-6ce78860e297] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:46 policy-pap | [2024-03-11T23:14:48.721+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:46 policy-pap | [2024-03-11T23:14:48.789+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3, groupId=a774452f-60f0-41d2-bd1a-6ce78860e297] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:46 policy-pap | [2024-03-11T23:14:48.834+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:46 policy-pap | [2024-03-11T23:14:48.899+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3, groupId=a774452f-60f0-41d2-bd1a-6ce78860e297] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:46 policy-pap | [2024-03-11T23:14:48.950+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:46 policy-pap | [2024-03-11T23:14:49.016+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3, groupId=a774452f-60f0-41d2-bd1a-6ce78860e297] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:46 policy-pap | [2024-03-11T23:14:49.064+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:46 policy-pap | [2024-03-11T23:14:49.130+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3, groupId=a774452f-60f0-41d2-bd1a-6ce78860e297] Error while fetching metadata with 
correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:46 policy-pap | [2024-03-11T23:14:49.170+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:46 policy-pap | [2024-03-11T23:14:49.236+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3, groupId=a774452f-60f0-41d2-bd1a-6ce78860e297] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:46 policy-pap | [2024-03-11T23:14:49.283+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:46 policy-pap | [2024-03-11T23:14:49.346+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3, groupId=a774452f-60f0-41d2-bd1a-6ce78860e297] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:46 policy-pap | [2024-03-11T23:14:49.396+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:46 policy-pap | [2024-03-11T23:14:49.473+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3, groupId=a774452f-60f0-41d2-bd1a-6ce78860e297] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:16:46 policy-pap | [2024-03-11T23:14:49.481+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3, groupId=a774452f-60f0-41d2-bd1a-6ce78860e297] (Re-)joining group 23:16:46 policy-pap | [2024-03-11T23:14:49.502+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:16:46 policy-pap | [2024-03-11T23:14:49.509+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 23:16:46 policy-pap | [2024-03-11T23:14:49.509+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3, groupId=a774452f-60f0-41d2-bd1a-6ce78860e297] Request joining group due to: need to re-join with the given member-id: consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3-cf0c0076-0825-4c64-9f0a-60e35d58c444 23:16:46 policy-pap | [2024-03-11T23:14:49.510+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3, groupId=a774452f-60f0-41d2-bd1a-6ce78860e297] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 23:16:46 policy-pap | [2024-03-11T23:14:49.510+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3, groupId=a774452f-60f0-41d2-bd1a-6ce78860e297] (Re-)joining group 23:16:46 policy-pap | [2024-03-11T23:14:49.517+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-70d3bb56-183c-4198-8352-709b4ff83c81 23:16:46 policy-pap | [2024-03-11T23:14:49.517+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 23:16:46 policy-pap | [2024-03-11T23:14:49.517+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 23:16:46 policy-pap | [2024-03-11T23:14:52.539+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-70d3bb56-183c-4198-8352-709b4ff83c81', protocol='range'} 23:16:46 policy-pap | [2024-03-11T23:14:52.541+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3, groupId=a774452f-60f0-41d2-bd1a-6ce78860e297] Successfully joined group with generation Generation{generationId=1, memberId='consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3-cf0c0076-0825-4c64-9f0a-60e35d58c444', protocol='range'} 23:16:46 policy-pap | [2024-03-11T23:14:52.550+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3, groupId=a774452f-60f0-41d2-bd1a-6ce78860e297] Finished assignment for group at generation 1: {consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3-cf0c0076-0825-4c64-9f0a-60e35d58c444=Assignment(partitions=[policy-pdp-pap-0])} 23:16:46 policy-pap | [2024-03-11T23:14:52.551+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-70d3bb56-183c-4198-8352-709b4ff83c81=Assignment(partitions=[policy-pdp-pap-0])} 23:16:46 policy-pap | [2024-03-11T23:14:52.585+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-70d3bb56-183c-4198-8352-709b4ff83c81', protocol='range'} 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 
1050-FK_ToscaTopologyTemplate_policyName.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-pap | [2024-03-11T23:14:52.585+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3, groupId=a774452f-60f0-41d2-bd1a-6ce78860e297] Successfully synced group in generation Generation{generationId=1, memberId='consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3-cf0c0076-0825-4c64-9f0a-60e35d58c444', protocol='range'} 23:16:46 policy-pap | [2024-03-11T23:14:52.586+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:46 policy-pap | [2024-03-11T23:14:52.586+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3, groupId=a774452f-60f0-41d2-bd1a-6ce78860e297] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:46 policy-pap | [2024-03-11T23:14:52.590+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 23:16:46 policy-pap | [2024-03-11T23:14:52.590+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3, groupId=a774452f-60f0-41d2-bd1a-6ce78860e297] Adding newly assigned partitions: policy-pdp-pap-0 23:16:46 policy-pap | [2024-03-11T23:14:52.612+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 23:16:46 policy-pap | [2024-03-11T23:14:52.612+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3, groupId=a774452f-60f0-41d2-bd1a-6ce78860e297] Found no committed offset for partition policy-pdp-pap-0 23:16:46 policy-pap | [2024-03-11T23:14:52.627+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3, groupId=a774452f-60f0-41d2-bd1a-6ce78860e297] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 23:16:46 policy-pap | [2024-03-11T23:14:52.627+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
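The policy-pap entries above trace an ordinary Kafka consumer-group startup: the consumers first see LEADER_NOT_AVAILABLE while the policy-pdp-pap topic leader is being elected, then discover the group coordinator, re-join once a member id has been issued (the MemberIdRequiredException on the first join attempt is expected), get partition policy-pdp-pap-0 assigned, sync the group, and reset their offset. The sketch below mirrors that flow from the client side; it is only a minimal illustration, assuming the kafka-python package and using the broker address (kafka:9092), group id (policy-pap) and topic name taken from the log, while everything else is illustrative rather than part of the CSIT code.

    from kafka import KafkaConsumer   # assumes kafka-python is installed
    import json

    # Subscribing with a group_id triggers the same coordinator discovery,
    # join/sync round-trips and partition assignment recorded in the log above.
    consumer = KafkaConsumer(
        "policy-pdp-pap",
        bootstrap_servers="kafka:9092",   # broker advertised in the log
        group_id="policy-pap",            # consumer group shown in the log
        auto_offset_reset="latest",       # comparable to the offset reset logged above
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )

    # poll() drives the rebalance; once it completes, assignment() reports the
    # partitions granted to this member (policy-pdp-pap-0 in the log).
    records = consumer.poll(timeout_ms=5000)
    print("assigned:", consumer.assignment())
    for tp, msgs in records.items():
        for msg in msgs:
            # The payloads on this topic are JSON with a "messageName" field
            # (PDP_STATUS, PDP_UPDATE, ...), as seen later in the log.
            print(tp.partition, msg.offset, msg.value.get("messageName"))

    consumer.close()

In the CSIT run itself this role is played by policy-pap's own Kafka source threads (KAFKA-source-policy-pdp-pap and KAFKA-source-policy-heartbeat); the sketch only reproduces the externally visible join/assign/poll behaviour.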
23:16:46 policy-pap | [2024-03-11T23:14:55.468+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-6] Initializing Spring DispatcherServlet 'dispatcherServlet' 23:16:46 policy-pap | [2024-03-11T23:14:55.468+00:00|INFO|DispatcherServlet|http-nio-6969-exec-6] Initializing Servlet 'dispatcherServlet' 23:16:46 policy-pap | [2024-03-11T23:14:55.471+00:00|INFO|DispatcherServlet|http-nio-6969-exec-6] Completed initialization in 3 ms 23:16:46 policy-pap | [2024-03-11T23:15:09.847+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers: 23:16:46 policy-pap | [] 23:16:46 policy-pap | [2024-03-11T23:15:09.847+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:46 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"907895e9-f7c7-4aa2-a4fd-84e148a46751","timestampMs":1710198909812,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup"} 23:16:46 policy-pap | [2024-03-11T23:15:09.848+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:46 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"907895e9-f7c7-4aa2-a4fd-84e148a46751","timestampMs":1710198909812,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup"} 23:16:46 policy-pap | [2024-03-11T23:15:09.857+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 23:16:46 policy-pap | [2024-03-11T23:15:09.938+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpUpdate starting 23:16:46 policy-pap | [2024-03-11T23:15:09.938+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpUpdate starting listener 23:16:46 policy-pap | [2024-03-11T23:15:09.938+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpUpdate starting timer 23:16:46 policy-pap | [2024-03-11T23:15:09.938+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=d8098987-1a06-4b4b-bcac-21f90c18f0d0, expireMs=1710198939938] 23:16:46 policy-pap | [2024-03-11T23:15:09.940+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpUpdate starting enqueue 23:16:46 policy-pap | [2024-03-11T23:15:09.940+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpUpdate started 23:16:46 policy-pap | [2024-03-11T23:15:09.940+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=d8098987-1a06-4b4b-bcac-21f90c18f0d0, expireMs=1710198939938] 23:16:46 policy-pap | [2024-03-11T23:15:09.943+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:46 policy-pap | {"source":"pap-dd126ff4-bdda-4dfa-b8de-79b15c74268d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"d8098987-1a06-4b4b-bcac-21f90c18f0d0","timestampMs":1710198909920,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:46 policy-pap | [2024-03-11T23:15:09.987+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:46 policy-pap | 
{"source":"pap-dd126ff4-bdda-4dfa-b8de-79b15c74268d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"d8098987-1a06-4b4b-bcac-21f90c18f0d0","timestampMs":1710198909920,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:46 policy-pap | [2024-03-11T23:15:09.988+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 23:16:46 policy-pap | [2024-03-11T23:15:09.988+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:46 policy-pap | {"source":"pap-dd126ff4-bdda-4dfa-b8de-79b15c74268d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"d8098987-1a06-4b4b-bcac-21f90c18f0d0","timestampMs":1710198909920,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:46 policy-pap | [2024-03-11T23:15:09.988+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 23:16:46 policy-pap | [2024-03-11T23:15:09.999+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:46 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"99559673-7f3e-410b-a70d-cce89103c5e6","timestampMs":1710198909988,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup"} 23:16:46 policy-pap | [2024-03-11T23:15:10.000+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 23:16:46 policy-pap | [2024-03-11T23:15:10.000+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:46 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"d8098987-1a06-4b4b-bcac-21f90c18f0d0","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"ddde4219-3842-42f4-b1e0-15afacfe0147","timestampMs":1710198909989,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:46 policy-pap | [2024-03-11T23:15:10.002+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.122754044Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.79388ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.127512461Z level=info msg="Executing migration" id="add scope column to seed_assignment" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.137095006Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=9.581284ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.140729039Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.141594987Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=864.328µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.147524678Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 23:16:46 kafka | [2024-03-11 23:14:48,645] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 
epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,645] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,645] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,645] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,645] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,645] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,645] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,645] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,645] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,645] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,645] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,645] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,645] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,645] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,645] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,645] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 
(state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,645] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,645] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,645] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,645] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,646] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,646] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,646] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,646] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,646] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,646] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,646] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,646] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,646] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,646] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,646] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,646] TRACE [Broker id=1] Handling 
LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,646] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,646] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,646] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,646] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 23:16:46 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0100-pdp.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET 
p.id=t.row_num 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a 23:16:46 policy-db-migrator | JOIN pdpstatistics b 23:16:46 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp 23:16:46 policy-db-migrator | SET a.id = b.id 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 kafka | [2024-03-11 23:14:48,646] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,646] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,646] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,646] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,646] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,646] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,646] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,646] TRACE [Broker id=1] Handling LeaderAndIsr request 
correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,646] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,647] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 23:16:46 kafka | [2024-03-11 23:14:48,647] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,717] INFO [LogLoader partition=__consumer_offsets-3, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:48,734] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:48,739] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,741] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,744] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,761] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:48,762] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:48,762] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,763] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,763] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,773] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:48,774] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:48,774] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,774] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,774] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,784] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:48,785] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:48,789] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,789] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0210-sequence.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0220-sequence.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, toscaPolicyTypeVersion) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0120-toscatrigger.sql 23:16:46 
policy-db-migrator | -------------- 23:16:46 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0140-toscaparameter.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0150-toscaproperty.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata 23:16:46 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"99559673-7f3e-410b-a70d-cce89103c5e6","timestampMs":1710198909988,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup"} 23:16:46 policy-pap | [2024-03-11T23:15:10.003+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpUpdate stopping 23:16:46 policy-pap | [2024-03-11T23:15:10.003+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpUpdate stopping enqueue 23:16:46 policy-pap | [2024-03-11T23:15:10.004+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpUpdate stopping timer 23:16:46 policy-pap | [2024-03-11T23:15:10.004+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=d8098987-1a06-4b4b-bcac-21f90c18f0d0, expireMs=1710198939938] 23:16:46 policy-pap | [2024-03-11T23:15:10.004+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpUpdate stopping listener 23:16:46 policy-pap | [2024-03-11T23:15:10.004+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpUpdate stopped 23:16:46 policy-pap | [2024-03-11T23:15:10.011+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpUpdate successful 23:16:46 policy-pap | [2024-03-11T23:15:10.011+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 start publishing next request 23:16:46 policy-pap | [2024-03-11T23:15:10.011+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpStateChange starting 23:16:46 policy-pap | [2024-03-11T23:15:10.011+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpStateChange starting listener 23:16:46 policy-pap | [2024-03-11T23:15:10.011+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpStateChange starting timer 23:16:46 policy-pap | [2024-03-11T23:15:10.012+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer 
[name=d27320f5-5a2a-4f8f-ac8b-2a07f89773ff, expireMs=1710198940011] 23:16:46 policy-pap | [2024-03-11T23:15:10.012+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpStateChange starting enqueue 23:16:46 policy-pap | [2024-03-11T23:15:10.012+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpStateChange started 23:16:46 policy-pap | [2024-03-11T23:15:10.012+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 29999ms Timer [name=d27320f5-5a2a-4f8f-ac8b-2a07f89773ff, expireMs=1710198940011] 23:16:46 policy-pap | [2024-03-11T23:15:10.012+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:46 policy-pap | {"source":"pap-dd126ff4-bdda-4dfa-b8de-79b15c74268d","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"d27320f5-5a2a-4f8f-ac8b-2a07f89773ff","timestampMs":1710198909921,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:46 policy-pap | [2024-03-11T23:15:10.051+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:46 policy-pap | {"source":"pap-dd126ff4-bdda-4dfa-b8de-79b15c74268d","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"d27320f5-5a2a-4f8f-ac8b-2a07f89773ff","timestampMs":1710198909921,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:46 policy-pap | [2024-03-11T23:15:10.051+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 23:16:46 policy-pap | [2024-03-11T23:15:10.056+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:46 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"d27320f5-5a2a-4f8f-ac8b-2a07f89773ff","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"e31beda6-456f-45d4-901d-ef2340703340","timestampMs":1710198910027,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:46 policy-pap | [2024-03-11T23:15:10.074+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpStateChange stopping 23:16:46 policy-pap | [2024-03-11T23:15:10.074+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpStateChange stopping enqueue 23:16:46 policy-pap | [2024-03-11T23:15:10.074+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpStateChange stopping timer 23:16:46 policy-pap | [2024-03-11T23:15:10.074+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=d27320f5-5a2a-4f8f-ac8b-2a07f89773ff, expireMs=1710198940011] 23:16:46 policy-pap | [2024-03-11T23:15:10.074+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpStateChange stopping listener 23:16:46 policy-pap | [2024-03-11T23:15:10.074+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpStateChange stopped 23:16:46 policy-pap | [2024-03-11T23:15:10.074+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpStateChange successful 23:16:46 policy-pap | [2024-03-11T23:15:10.074+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 start publishing next request 23:16:46 policy-pap | [2024-03-11T23:15:10.074+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpUpdate starting 23:16:46 policy-pap | [2024-03-11T23:15:10.074+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpUpdate starting listener 23:16:46 policy-pap | [2024-03-11T23:15:10.074+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpUpdate starting timer 23:16:46 policy-pap | [2024-03-11T23:15:10.074+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=30caae45-2896-49b2-8943-7c3be2c45732, expireMs=1710198940074] 23:16:46 policy-pap | [2024-03-11T23:15:10.075+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpUpdate starting enqueue 23:16:46 policy-pap | [2024-03-11T23:15:10.075+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpUpdate started 23:16:46 policy-pap | [2024-03-11T23:15:10.075+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:46 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"d8098987-1a06-4b4b-bcac-21f90c18f0d0","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"ddde4219-3842-42f4-b1e0-15afacfe0147","timestampMs":1710198909989,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:46 policy-pap | [2024-03-11T23:15:10.075+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:46 policy-pap | 
{"source":"pap-dd126ff4-bdda-4dfa-b8de-79b15c74268d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"30caae45-2896-49b2-8943-7c3be2c45732","timestampMs":1710198910043,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:46 policy-pap | [2024-03-11T23:15:10.075+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id d8098987-1a06-4b4b-bcac-21f90c18f0d0 23:16:46 policy-pap | [2024-03-11T23:15:10.080+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.228547486Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=81.007387ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.233673889Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.235744002Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=2.071472ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.241096021Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.242372457Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.275826ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.24598961Z level=info msg="Executing migration" id="add primary key to seed_assigment" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.27106245Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=25.07291ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.277165724Z level=info msg="Executing migration" id="add origin column to seed_assignment" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.284631266Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=7.466122ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.289487494Z level=info msg="Executing migration" id="add origin to plugin seed_assignment" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.289795261Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=307.867µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.293340513Z level=info msg="Executing migration" id="prevent seeding OnCall access" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.293508816Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=168.853µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.298239333Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.298758084Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=518.131µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.30306126Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.303579221Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=515.001µs 23:16:46 grafana 
| logger=migrator t=2024-03-11T23:14:18.308661004Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.309032312Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=371.798µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.313475542Z level=info msg="Executing migration" id="create folder table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.314659466Z level=info msg="Migration successfully executed" id="create folder table" duration=1.182074ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.319551966Z level=info msg="Executing migration" id="Add index for parent_uid" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.321877113Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=2.324077ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.328462797Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.329790685Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.328318ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.333310186Z level=info msg="Executing migration" id="Update folder title length" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.333356707Z level=info msg="Migration successfully executed" id="Update folder title length" duration=43.811µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.338518962Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.340488092Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.97025ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.344401801Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.345815131Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.41329ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.349379453Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.350644068Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.263185ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.355214512Z level=info msg="Executing migration" id="Sync dashboard and folder table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.35566194Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=447.519µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.360317746Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.360591431Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=274.065µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.366858158Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.368259857Z level=info 
msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=1.398079ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.401075114Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.40285927Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=1.783996ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.408066686Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.410025826Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=1.95817ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.414061078Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.415365704Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.302886ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.422379997Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id" 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0100-upgrade.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | select 'upgrade to 1100 completed' as msg 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | msg 23:16:46 policy-db-migrator | upgrade to 1100 completed 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME 23:16:46 
policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0120-audit_sequence.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | TRUNCATE TABLE sequence 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 23:16:46 kafka | [2024-03-11 23:14:48,790] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,802] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:48,803] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:48,803] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,803] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,804] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,816] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:48,816] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:48,816] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,816] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,816] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,827] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:48,828] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:48,828] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,828] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,828] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,843] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:48,844] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:48,844] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,845] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,845] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,862] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:48,863] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:48,863] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,863] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,864] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,875] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:48,876] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:48,876] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,876] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,876] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,890] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:48,892] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:48,892] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,892] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,892] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,906] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:48,907] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:48,907] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,907] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,907] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,919] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:48,920] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:48,920] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,920] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,920] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,929] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:48,929] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:48,929] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,929] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,929] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,937] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:48,938] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:48,938] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,938] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,938] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,947] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:48,949] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:48,950] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,951] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | DROP TABLE pdpstatistics 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | DROP TABLE statistics_sequence 23:16:46 policy-db-migrator | -------------- 23:16:46 policy-db-migrator | 23:16:46 policy-db-migrator | policyadmin: OK: upgrade (1300) 23:16:46 policy-db-migrator | name version 23:16:46 policy-db-migrator | policyadmin 1300 23:16:46 policy-db-migrator | ID script operation from_version to_version tag success atTime 23:16:46 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:15 23:16:46 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:15 23:16:46 policy-db-migrator | 3 
0120-jpapdpsubgroup_policies.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:15 23:16:46 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:15 23:16:46 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:15 23:16:46 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:15 23:16:46 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:16 23:16:46 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:16 23:16:46 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:16 23:16:46 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:16 23:16:46 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:16 23:16:46 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:16 23:16:46 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:16 23:16:46 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:16 23:16:46 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:16 23:16:46 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:16 23:16:46 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:16 23:16:46 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:16 23:16:46 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:16 23:16:46 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:16 23:16:46 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:16 23:16:46 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:16 23:16:46 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:16 23:16:46 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:16 23:16:46 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:16 23:16:46 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:16 23:16:46 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:16 23:16:46 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:16 23:16:46 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:16 23:16:46 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:17 23:16:46 policy-db-migrator 
| 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:17 23:16:46 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:17 23:16:46 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:17 23:16:46 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:17 23:16:46 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:17 23:16:46 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:17 23:16:46 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:17 23:16:46 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:17 23:16:46 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:17 23:16:46 policy-pap | {"source":"pap-dd126ff4-bdda-4dfa-b8de-79b15c74268d","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"d27320f5-5a2a-4f8f-ac8b-2a07f89773ff","timestampMs":1710198909921,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:46 policy-pap | [2024-03-11T23:15:10.080+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 23:16:46 policy-pap | [2024-03-11T23:15:10.088+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:46 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"d27320f5-5a2a-4f8f-ac8b-2a07f89773ff","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"e31beda6-456f-45d4-901d-ef2340703340","timestampMs":1710198910027,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:46 policy-pap | [2024-03-11T23:15:10.088+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:46 policy-pap | {"source":"pap-dd126ff4-bdda-4dfa-b8de-79b15c74268d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"30caae45-2896-49b2-8943-7c3be2c45732","timestampMs":1710198910043,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:46 policy-pap | [2024-03-11T23:15:10.088+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id d27320f5-5a2a-4f8f-ac8b-2a07f89773ff 23:16:46 policy-pap | [2024-03-11T23:15:10.089+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 23:16:46 policy-pap | [2024-03-11T23:15:10.103+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:46 policy-pap | {"source":"pap-dd126ff4-bdda-4dfa-b8de-79b15c74268d","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"30caae45-2896-49b2-8943-7c3be2c45732","timestampMs":1710198910043,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:46 policy-pap | [2024-03-11T23:15:10.103+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 23:16:46 policy-pap | [2024-03-11T23:15:10.105+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:46 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"30caae45-2896-49b2-8943-7c3be2c45732","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"d130784a-2b9c-4a70-bb25-71f0f75a6427","timestampMs":1710198910088,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:46 policy-pap | [2024-03-11T23:15:10.105+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpUpdate stopping 23:16:46 policy-pap | [2024-03-11T23:15:10.105+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpUpdate stopping enqueue 23:16:46 policy-pap | [2024-03-11T23:15:10.105+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpUpdate stopping timer 23:16:46 policy-pap | [2024-03-11T23:15:10.105+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=30caae45-2896-49b2-8943-7c3be2c45732, expireMs=1710198940074] 23:16:46 policy-pap | [2024-03-11T23:15:10.105+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpUpdate stopping listener 23:16:46 policy-pap | [2024-03-11T23:15:10.105+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpUpdate stopped 23:16:46 policy-pap | [2024-03-11T23:15:10.111+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 PdpUpdate successful 23:16:46 policy-pap | 
[2024-03-11T23:15:10.111+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-427b0bc7-703d-4d13-b440-d7b93ca39961 has no more requests 23:16:46 policy-pap | [2024-03-11T23:15:10.123+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:46 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"30caae45-2896-49b2-8943-7c3be2c45732","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"d130784a-2b9c-4a70-bb25-71f0f75a6427","timestampMs":1710198910088,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:46 policy-pap | [2024-03-11T23:15:10.124+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 30caae45-2896-49b2-8943-7c3be2c45732 23:16:46 policy-pap | [2024-03-11T23:15:16.121+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:16:46 policy-pap | [2024-03-11T23:15:16.127+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:16:46 policy-pap | [2024-03-11T23:15:16.544+00:00|INFO|SessionData|http-nio-6969-exec-7] unknown group testGroup 23:16:46 policy-pap | [2024-03-11T23:15:17.087+00:00|INFO|SessionData|http-nio-6969-exec-7] create cached group testGroup 23:16:46 policy-pap | [2024-03-11T23:15:17.088+00:00|INFO|SessionData|http-nio-6969-exec-7] creating DB group testGroup 23:16:46 policy-pap | [2024-03-11T23:15:17.571+00:00|INFO|SessionData|http-nio-6969-exec-9] cache group testGroup 23:16:46 policy-pap | [2024-03-11T23:15:17.803+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] Registering a deploy for policy onap.restart.tca 1.0.0 23:16:46 policy-pap | [2024-03-11T23:15:17.914+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 23:16:46 policy-pap | [2024-03-11T23:15:17.914+00:00|INFO|SessionData|http-nio-6969-exec-9] update cached group testGroup 23:16:46 policy-pap | [2024-03-11T23:15:17.915+00:00|INFO|SessionData|http-nio-6969-exec-9] updating DB group testGroup 23:16:46 policy-pap | [2024-03-11T23:15:17.931+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-9] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-03-11T23:15:17Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-03-11T23:15:17Z, user=policyadmin)] 23:16:46 policy-pap | [2024-03-11T23:15:18.641+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup 23:16:46 policy-pap | [2024-03-11T23:15:18.642+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 23:16:46 policy-pap | [2024-03-11T23:15:18.643+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy onap.restart.tca 1.0.0 23:16:46 policy-pap | [2024-03-11T23:15:18.643+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup 23:16:46 policy-pap | [2024-03-11T23:15:18.643+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup 23:16:46 policy-pap | [2024-03-11T23:15:18.671+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, 
pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-03-11T23:15:18Z, user=policyadmin)] 23:16:46 policy-pap | [2024-03-11T23:15:19.014+00:00|INFO|SessionData|http-nio-6969-exec-2] cache group defaultGroup 23:16:46 policy-pap | [2024-03-11T23:15:19.014+00:00|INFO|SessionData|http-nio-6969-exec-2] cache group testGroup 23:16:46 policy-pap | [2024-03-11T23:15:19.014+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-2] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.424134052Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=1.754285ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.4303644Z level=info msg="Executing migration" id="create anon_device table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.431438712Z level=info msg="Migration successfully executed" id="create anon_device table" duration=1.072772ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.436382962Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.437874982Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.49158ms 23:16:46 kafka | [2024-03-11 23:14:48,951] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,962] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:48,963] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:48,963] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,964] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,964] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
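Note on the PAP/PDP exchange captured in the policy-pap entries above: PAP publishes PDP_UPDATE and PDP_STATE_CHANGE requests, starts a timer per requestId, and matches the PDP_STATUS responses back to the pending request via the response.responseTo field (hence the "update timer cancelled" and "no listener for request id" entries). The short Python sketch below illustrates only that matching step; it is not the policy-pap implementation, and it uses nothing beyond the JSON fields visible in the log lines (the pending-request description string is hypothetical).

import json

# One PDP_STATUS payload copied from the policy-pap log above.
raw = '''{"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY",
"description":"Pdp status response message for PdpUpdate","policies":[],
"response":{"responseTo":"30caae45-2896-49b2-8943-7c3be2c45732",
"responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},
"messageName":"PDP_STATUS","requestId":"d130784a-2b9c-4a70-bb25-71f0f75a6427",
"timestampMs":1710198910088,"name":"apex-427b0bc7-703d-4d13-b440-d7b93ca39961",
"pdpGroup":"defaultGroup","pdpSubgroup":"apex"}'''

# Requests still waiting for an answer, keyed by the requestId PAP sent out
# (the id below is the PDP_UPDATE id seen in the "update timer cancelled" entry).
pending = {"30caae45-2896-49b2-8943-7c3be2c45732": "PdpUpdate defaultGroup/apex"}

msg = json.loads(raw)
if msg.get("messageName") == "PDP_STATUS":
    resp = msg.get("response") or {}
    req_id = resp.get("responseTo")
    if req_id in pending:
        ok = resp.get("responseStatus") == "SUCCESS"
        print(f"{pending.pop(req_id)}: {'successful' if ok else 'failed'} "
              f"({resp.get('responseMessage')})")
    else:
        # Mirrors the "no listener for request id ..." entries in the log.
        print(f"no listener for request id {req_id}")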
(state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,972] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:48,973] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:48,973] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,973] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,973] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,980] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:48,980] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:48,980] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,980] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,980] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,988] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:48,989] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:48,989] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,989] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,989] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:48,996] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:48,996] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:48,996] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,996] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:48,996] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,004] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:49,004] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:49,004] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,004] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,004] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,011] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:49,011] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:49,011] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,011] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,011] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,021] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:49,022] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:49,022] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,022] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,022] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,038] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:49,039] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:49,039] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,039] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,039] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,056] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:49,056] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:49,056] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,056] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,057] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,093] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:49,094] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:49,094] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,094] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,095] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,107] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:49,108] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:49,108] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,108] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,108] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,126] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:49,127] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:49,127] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,127] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,128] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
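The repeated "Created log for partition __consumer_offsets-*" entries above all carry the same per-topic settings: cleanup.policy=compact, compression.type=producer, and segment.bytes=104857600 (100 MiB segments), with a single in-sync replica on broker 1. For reference only, a topic with equivalent settings could be created as sketched below; this assumes the third-party kafka-python package and a reachable broker at the address shown, and is not part of the CSIT job itself.

from kafka.admin import KafkaAdminClient, NewTopic

# Settings copied from the "Created log for partition ..." entries above.
offsets_like_configs = {
    "cleanup.policy": "compact",      # keep only the latest record per key
    "compression.type": "producer",   # retain whatever compression the producer used
    "segment.bytes": "104857600",     # roll log segments at 100 MiB
}

admin = KafkaAdminClient(bootstrap_servers="localhost:9092")  # assumed broker address
topic = NewTopic(
    name="example-compacted-topic",   # hypothetical topic name
    num_partitions=50,                # __consumer_offsets defaults to 50 partitions
    replication_factor=1,             # matches ISR [1] in this single-broker setup
    topic_configs=offsets_like_configs,
)
admin.create_topics(new_topics=[topic])
admin.close()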
(state.change.logger) 23:16:46 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:17 23:16:46 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:17 23:16:46 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:17 23:16:46 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:17 23:16:46 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:17 23:16:46 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:17 23:16:46 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:17 23:16:46 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:17 23:16:46 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:17 23:16:46 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:17 23:16:46 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:17 23:16:46 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:17 23:16:46 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:17 23:16:46 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:17 23:16:46 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:18 23:16:46 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:18 23:16:46 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:18 23:16:46 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:18 23:16:46 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:18 23:16:46 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:18 23:16:46 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:18 23:16:46 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:18 23:16:46 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:18 23:16:46 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:18 23:16:46 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:18 23:16:46 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:18 23:16:46 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:18 23:16:46 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:18 23:16:46 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:18 23:16:46 policy-db-migrator | 69 0780-toscarequirements.sql 
upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:18 23:16:46 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:18 23:16:46 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:18 23:16:46 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:18 23:16:46 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:18 23:16:46 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:18 23:16:46 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:18 23:16:46 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:18 23:16:46 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:19 23:16:46 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:19 23:16:46 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:19 23:16:46 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:19 23:16:46 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:19 23:16:46 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:19 23:16:46 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:19 23:16:46 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:19 23:16:46 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:19 23:16:46 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:19 23:16:46 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:19 23:16:46 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:19 23:16:46 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:19 23:16:46 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:19 23:16:46 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:19 23:16:46 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:19 23:16:46 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:19 23:16:46 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:19 23:16:46 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:20 23:16:46 
policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 1103242314150800u 1 2024-03-11 23:14:20 23:16:46 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 1103242314150900u 1 2024-03-11 23:14:20 23:16:46 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 1103242314150900u 1 2024-03-11 23:14:20 23:16:46 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 1103242314150900u 1 2024-03-11 23:14:20 23:16:46 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 1103242314150900u 1 2024-03-11 23:14:20 23:16:46 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 1103242314150900u 1 2024-03-11 23:14:20 23:16:46 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 1103242314150900u 1 2024-03-11 23:14:20 23:16:46 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1103242314150900u 1 2024-03-11 23:14:20 23:16:46 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1103242314150900u 1 2024-03-11 23:14:20 23:16:46 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 1103242314150900u 1 2024-03-11 23:14:20 23:16:46 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 1103242314150900u 1 2024-03-11 23:14:20 23:16:46 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 1103242314150900u 1 2024-03-11 23:14:20 23:16:46 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 1103242314150900u 1 2024-03-11 23:14:20 23:16:46 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 1103242314150900u 1 2024-03-11 23:14:20 23:16:46 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 1103242314151000u 1 2024-03-11 23:14:20 23:16:46 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 1103242314151000u 1 2024-03-11 23:14:20 23:16:46 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 1103242314151000u 1 2024-03-11 23:14:20 23:16:46 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 1103242314151000u 1 2024-03-11 23:14:20 23:16:46 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 1103242314151000u 1 2024-03-11 23:14:20 23:16:46 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 1103242314151000u 1 2024-03-11 23:14:20 23:16:46 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 1103242314151000u 1 2024-03-11 23:14:21 23:16:46 policy-pap | [2024-03-11T23:15:19.015+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 23:16:46 policy-pap | [2024-03-11T23:15:19.015+00:00|INFO|SessionData|http-nio-6969-exec-2] update cached group testGroup 23:16:46 policy-pap | [2024-03-11T23:15:19.015+00:00|INFO|SessionData|http-nio-6969-exec-2] updating DB group testGroup 23:16:46 policy-pap | [2024-03-11T23:15:19.043+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-2] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-03-11T23:15:19Z, user=policyadmin)] 23:16:46 policy-pap | [2024-03-11T23:15:39.636+00:00|INFO|SessionData|http-nio-6969-exec-9] cache group testGroup 23:16:46 policy-pap | [2024-03-11T23:15:39.639+00:00|INFO|SessionData|http-nio-6969-exec-9] deleting DB group testGroup 23:16:46 policy-pap | 
[2024-03-11T23:15:39.938+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=d8098987-1a06-4b4b-bcac-21f90c18f0d0, expireMs=1710198939938] 23:16:46 policy-pap | [2024-03-11T23:15:40.012+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=d27320f5-5a2a-4f8f-ac8b-2a07f89773ff, expireMs=1710198940011] 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.441568848Z level=info msg="Executing migration" id="add index anon_device.updated_at" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.442720461Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.150843ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.446357985Z level=info msg="Executing migration" id="create signing_key table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.447333995Z level=info msg="Migration successfully executed" id="create signing_key table" duration=974.83µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.452371808Z level=info msg="Executing migration" id="add unique index signing_key.key_id" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.454331177Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.956729ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.458450761Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.459943841Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.50344ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.464545965Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.465133997Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=588.642µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.471635129Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.483331247Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=11.697298ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.487784527Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.488317108Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=540.821µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.492028764Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.492864841Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=835.407µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.497386403Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.499175669Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=1.789866ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.503026888Z level=info msg="Executing migration" id="Delete unique index for 
dashboard_org_id_folder_uid_title" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.504536238Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=1.51222ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.508246144Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.509573661Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=1.327367ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.516202655Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.517461511Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=1.258646ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.521265258Z level=info msg="Executing migration" id="create sso_setting table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.523195668Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.92954ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.529628068Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.530434164Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=815.556µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.536521969Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.536855435Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=333.926µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.540839036Z level=info msg="Executing migration" id="alter kv_store.value to longtext" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.541124942Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=285.896µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.546964101Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.558353582Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=11.386981ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.563220742Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.573014901Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=9.79603ms 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.577943721Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration" 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.578315929Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=371.718µs 23:16:46 grafana | logger=migrator t=2024-03-11T23:14:18.585204789Z level=info msg="migrations completed" performed=547 skipped=0 duration=4.567429333s 23:16:46 grafana | logger=sqlstore 
t=2024-03-11T23:14:18.598194223Z level=info msg="Created default admin" user=admin 23:16:46 grafana | logger=sqlstore t=2024-03-11T23:14:18.59854812Z level=info msg="Created default organization" 23:16:46 grafana | logger=secrets t=2024-03-11T23:14:18.603767997Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 23:16:46 grafana | logger=plugin.store t=2024-03-11T23:14:18.627950508Z level=info msg="Loading plugins..." 23:16:46 grafana | logger=local.finder t=2024-03-11T23:14:18.681610709Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 23:16:46 grafana | logger=plugin.store t=2024-03-11T23:14:18.681645429Z level=info msg="Plugins loaded" count=55 duration=53.696311ms 23:16:46 grafana | logger=query_data t=2024-03-11T23:14:18.68462394Z level=info msg="Query Service initialization" 23:16:46 grafana | logger=live.push_http t=2024-03-11T23:14:18.696403451Z level=info msg="Live Push Gateway initialization" 23:16:46 grafana | logger=ngalert.migration t=2024-03-11T23:14:18.703509644Z level=info msg=Starting 23:16:46 grafana | logger=ngalert.migration t=2024-03-11T23:14:18.703940913Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false 23:16:46 grafana | logger=ngalert.migration orgID=1 t=2024-03-11T23:14:18.704519826Z level=info msg="Migrating alerts for organisation" 23:16:46 grafana | logger=ngalert.migration orgID=1 t=2024-03-11T23:14:18.705162068Z level=info msg="Alerts found to migrate" alerts=0 23:16:46 grafana | logger=ngalert.migration t=2024-03-11T23:14:18.70671894Z level=info msg="Completed alerting migration" 23:16:46 grafana | logger=ngalert.state.manager t=2024-03-11T23:14:18.74069769Z level=info msg="Running in alternative execution of Error/NoData mode" 23:16:46 grafana | logger=infra.usagestats.collector t=2024-03-11T23:14:18.743316214Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 23:16:46 grafana | logger=provisioning.datasources t=2024-03-11T23:14:18.745507189Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 23:16:46 grafana | logger=provisioning.alerting t=2024-03-11T23:14:18.800173911Z level=info msg="starting to provision alerting" 23:16:46 grafana | logger=provisioning.alerting t=2024-03-11T23:14:18.800217122Z level=info msg="finished to provision alerting" 23:16:46 grafana | logger=grafanaStorageLogger t=2024-03-11T23:14:18.800541488Z level=info msg="Storage starting" 23:16:46 grafana | logger=ngalert.state.manager t=2024-03-11T23:14:18.801027698Z level=info msg="Warming state cache for startup" 23:16:46 grafana | logger=ngalert.multiorg.alertmanager t=2024-03-11T23:14:18.80407947Z level=info msg="Starting MultiOrg Alertmanager" 23:16:46 grafana | logger=http.server t=2024-03-11T23:14:18.807220514Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 23:16:46 grafana | logger=ngalert.state.manager t=2024-03-11T23:14:18.808526261Z level=info msg="State cache has been initialized" states=0 duration=6.641214ms 23:16:46 grafana | logger=ngalert.scheduler t=2024-03-11T23:14:18.809351697Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1 23:16:46 grafana | logger=ticker t=2024-03-11T23:14:18.809561431Z level=info msg=starting first_tick=2024-03-11T23:14:20Z 23:16:46 grafana | logger=provisioning.dashboard t=2024-03-11T23:14:18.862635011Z level=info msg="starting to provision dashboards" 
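The grafana migrator entries above each report a migration id and a duration (in µs, ms, or s), so the slow migrations can be picked out of the console output directly. The following is a minimal, illustrative Python sketch, not part of the CI job itself: it assumes a saved copy of this console output in a local file (the name console.log is hypothetical) and simply ranks migrations by the duration= field as it appears in the lines above.

    # Illustrative only: summarize the slowest Grafana migrations from a saved
    # copy of this console log. "console.log" is an assumed local file name.
    import re

    PATTERN = re.compile(
        r'msg="Migration successfully executed" id="(?P<id>[^"]+)" '
        r'duration=(?P<value>[\d.]+)(?P<unit>µs|ms|s)\b'
    )

    # Convert every observed unit to milliseconds for a single ranking.
    UNIT_TO_MS = {"µs": 0.001, "ms": 1.0, "s": 1000.0}

    def slowest_migrations(path, top=5):
        durations = []
        with open(path, encoding="utf-8") as handle:
            for line in handle:
                match = PATTERN.search(line)
                if match:
                    ms = float(match.group("value")) * UNIT_TO_MS[match.group("unit")]
                    durations.append((ms, match.group("id")))
        return sorted(durations, reverse=True)[:top]

    if __name__ == "__main__":
        for ms, migration_id in slowest_migrations("console.log"):
            print(f"{ms:10.3f} ms  {migration_id}")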
23:16:46 grafana | logger=plugins.update.checker t=2024-03-11T23:14:18.901320408Z level=info msg="Update check succeeded" duration=96.902012ms 23:16:46 grafana | logger=grafana.update.checker t=2024-03-11T23:14:18.945363664Z level=info msg="Update check succeeded" duration=143.275945ms 23:16:46 grafana | logger=sqlstore.transactions t=2024-03-11T23:14:18.98747184Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 23:16:46 grafana | logger=sqlstore.transactions t=2024-03-11T23:14:19.006871725Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked" 23:16:46 grafana | logger=sqlstore.transactions t=2024-03-11T23:14:19.019410449Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=2 code="database is locked" 23:16:46 grafana | logger=grafana-apiserver t=2024-03-11T23:14:19.064232631Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 23:16:46 grafana | logger=grafana-apiserver t=2024-03-11T23:14:19.066049649Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager" 23:16:46 grafana | logger=provisioning.dashboard t=2024-03-11T23:14:19.181393206Z level=info msg="finished to provision dashboards" 23:16:46 grafana | logger=infra.usagestats t=2024-03-11T23:16:01.817433105Z level=info msg="Usage stats are ready to report" 23:16:46 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 1103242314151000u 1 2024-03-11 23:14:21 23:16:46 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 1103242314151000u 1 2024-03-11 23:14:21 23:16:46 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 1103242314151100u 1 2024-03-11 23:14:21 23:16:46 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 1103242314151200u 1 2024-03-11 23:14:21 23:16:46 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 1103242314151200u 1 2024-03-11 23:14:21 23:16:46 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 1103242314151200u 1 2024-03-11 23:14:21 23:16:46 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 1103242314151200u 1 2024-03-11 23:14:21 23:16:46 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 1103242314151300u 1 2024-03-11 23:14:21 23:16:46 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 1103242314151300u 1 2024-03-11 23:14:21 23:16:46 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 1103242314151300u 1 2024-03-11 23:14:21 23:16:46 policy-db-migrator | policyadmin: OK @ 1300 23:16:46 kafka | [2024-03-11 23:14:49,143] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:49,143] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:49,144] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,144] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 
(kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,144] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,151] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:49,152] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:49,152] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,152] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,152] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,161] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:49,161] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:49,161] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,161] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,161] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,170] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:49,171] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:49,171] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,171] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,171] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,183] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:49,185] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:49,185] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,185] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,185] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,194] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:49,194] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:49,194] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,194] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,194] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(MS889OzURY6CvMQUxdh95w) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,202] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:49,203] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:49,203] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,203] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,203] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,211] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:49,212] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:49,212] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,212] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,212] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,222] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:49,222] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:49,222] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,222] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,222] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,229] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:49,230] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:49,230] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,230] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,230] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,236] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:49,240] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:49,240] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,240] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,240] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,246] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:49,246] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:49,246] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,246] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,246] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,261] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:49,261] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:49,261] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,261] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,261] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,273] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:49,274] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:49,274] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,274] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,274] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,279] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:49,280] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:49,280] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,280] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,280] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,293] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:49,294] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:49,294] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,294] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,294] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,310] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:49,311] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:49,311] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,311] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,311] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,320] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:49,320] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:49,320] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,320] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,320] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,333] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:49,333] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:49,333] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,333] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,333] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,347] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:49,347] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:49,347] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,347] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,348] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,360] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:49,362] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:49,362] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,362] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,362] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,374] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:46 kafka | [2024-03-11 23:14:49,375] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:46 kafka | [2024-03-11 23:14:49,375] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,375] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 23:16:46 kafka | [2024-03-11 23:14:49,375] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(pKZWw3Z-TCqlQNYpHAZfkA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,382] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,382] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,382] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,382] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,382] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,382] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,382] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,382] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,382] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,382] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,382] TRACE [Broker id=1] 
Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,382] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,382] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,385] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,385] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,385] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,385] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,385] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,385] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,385] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 
(state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 
for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,386] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,392] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,394] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,397] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,397] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,398] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,398] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,398] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,398] INFO [GroupMetadataManager 
brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,398] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,398] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,398] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,398] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,398] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,398] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,398] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,398] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,398] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,398] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,398] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,398] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,398] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,398] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,398] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,398] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,398] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,399] INFO [GroupMetadataManager brokerId=1] Scheduling loading of 
offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,399] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,399] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,399] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,399] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,399] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,399] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,399] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,399] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,399] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,399] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,399] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,399] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,399] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,399] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,399] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,399] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,399] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,399] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,399] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,399] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,400] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,400] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,400] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,400] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,400] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,400] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,400] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,400] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,400] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,400] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,400] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,400] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,400] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,400] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,400] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,400] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,400] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,400] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,400] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,400] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,401] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,401] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,401] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,401] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,401] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,401] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,401] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,401] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,401] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,401] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,401] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,401] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,401] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,401] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,402] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,402] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,402] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,402] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,402] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,402] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,402] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,402] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,402] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,402] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,402] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,402] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,402] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,402] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,403] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,403] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,403] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,403] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,403] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,403] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,403] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 8 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,409] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,409] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,409] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,409] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,409] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,409] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,410] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,410] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,410] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,410] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,410] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,410] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,410] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,410] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,410] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,410] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,411] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,411] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,411] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,411] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,411] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,411] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,411] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,411] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,411] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,411] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,412] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,412] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,412] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,412] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,412] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,412] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,412] INFO [Broker id=1] Finished LeaderAndIsr request in 819ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,412] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,412] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,412] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,412] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,412] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,413] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 12 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,413] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 12 milliseconds for epoch 0, of which 12 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,413] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,413] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,413] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,413] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,413] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,413] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,413] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 11 milliseconds for epoch 0, of which 11 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,413] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,413] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 10 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,414] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 11 milliseconds for epoch 0, of which 10 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:46 kafka | [2024-03-11 23:14:49,419] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=pKZWw3Z-TCqlQNYpHAZfkA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, 
errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=MS889OzURY6CvMQUxdh95w, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 
with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, 
replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info 
UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,430] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,431] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,431] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,431] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,431] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,431] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,431] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in response to UpdateMetadata request sent by controller 1 epoch 1 
with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,431] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,431] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,431] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,431] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,431] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,431] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,431] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,431] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,431] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, 
replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,431] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,431] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,431] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,431] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,432] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:46 kafka | [2024-03-11 23:14:49,502] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group a774452f-60f0-41d2-bd1a-6ce78860e297 in Empty state. Created a new member id consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3-cf0c0076-0825-4c64-9f0a-60e35d58c444 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,514] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-70d3bb56-183c-4198-8352-709b4ff83c81 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,522] INFO [GroupCoordinator 1]: Preparing to rebalance group a774452f-60f0-41d2-bd1a-6ce78860e297 in state PreparingRebalance with old generation 0 (__consumer_offsets-19) (reason: Adding new member consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3-cf0c0076-0825-4c64-9f0a-60e35d58c444 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:49,522] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-70d3bb56-183c-4198-8352-709b4ff83c81 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:50,130] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group ba46bd84-7ae1-41fa-a3bb-e4918f472988 in Empty state. Created a new member id consumer-ba46bd84-7ae1-41fa-a3bb-e4918f472988-2-bdcbc697-b8f3-4f61-9d8f-a6ee69f6351a and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:50,133] INFO [GroupCoordinator 1]: Preparing to rebalance group ba46bd84-7ae1-41fa-a3bb-e4918f472988 in state PreparingRebalance with old generation 0 (__consumer_offsets-26) (reason: Adding new member consumer-ba46bd84-7ae1-41fa-a3bb-e4918f472988-2-bdcbc697-b8f3-4f61-9d8f-a6ee69f6351a with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:52,536] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:52,540] INFO [GroupCoordinator 1]: Stabilized group a774452f-60f0-41d2-bd1a-6ce78860e297 generation 1 (__consumer_offsets-19) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:52,562] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-70d3bb56-183c-4198-8352-709b4ff83c81 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:52,563] INFO [GroupCoordinator 1]: Assignment received from leader consumer-a774452f-60f0-41d2-bd1a-6ce78860e297-3-cf0c0076-0825-4c64-9f0a-60e35d58c444 for group a774452f-60f0-41d2-bd1a-6ce78860e297 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:53,135] INFO [GroupCoordinator 1]: Stabilized group ba46bd84-7ae1-41fa-a3bb-e4918f472988 generation 1 (__consumer_offsets-26) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:46 kafka | [2024-03-11 23:14:53,151] INFO [GroupCoordinator 1]: Assignment received from leader consumer-ba46bd84-7ae1-41fa-a3bb-e4918f472988-2-bdcbc697-b8f3-4f61-9d8f-a6ee69f6351a for group ba46bd84-7ae1-41fa-a3bb-e4918f472988 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:46 ++ echo 'Tearing down containers...' 23:16:46 Tearing down containers... 23:16:46 ++ docker-compose down -v --remove-orphans 23:16:47 Stopping policy-apex-pdp ... 23:16:47 Stopping grafana ... 
23:16:47 Stopping policy-pap ... 23:16:47 Stopping policy-api ... 23:16:47 Stopping kafka ... 23:16:47 Stopping compose_zookeeper_1 ... 23:16:47 Stopping mariadb ... 23:16:47 Stopping simulator ... 23:16:47 Stopping prometheus ... 23:16:47 Stopping grafana ... done 23:16:48 Stopping prometheus ... done 23:16:57 Stopping policy-apex-pdp ... done 23:17:08 Stopping simulator ... done 23:17:08 Stopping policy-pap ... done 23:17:09 Stopping mariadb ... done 23:17:09 Stopping kafka ... done 23:17:09 Stopping compose_zookeeper_1 ... done 23:17:18 Stopping policy-api ... done 23:17:18 Removing policy-apex-pdp ... 23:17:18 Removing grafana ... 23:17:18 Removing policy-pap ... 23:17:18 Removing policy-api ... 23:17:18 Removing policy-db-migrator ... 23:17:18 Removing kafka ... 23:17:18 Removing compose_zookeeper_1 ... 23:17:18 Removing mariadb ... 23:17:18 Removing simulator ... 23:17:18 Removing prometheus ... 23:17:18 Removing policy-db-migrator ... done 23:17:18 Removing simulator ... done 23:17:18 Removing policy-api ... done 23:17:18 Removing mariadb ... done 23:17:18 Removing kafka ... done 23:17:18 Removing prometheus ... done 23:17:18 Removing grafana ... done 23:17:18 Removing compose_zookeeper_1 ... done 23:17:18 Removing policy-apex-pdp ... done 23:17:18 Removing policy-pap ... done 23:17:18 Removing network compose_default 23:17:18 ++ cd /w/workspace/policy-pap-master-project-csit-pap 23:17:18 + load_set 23:17:18 + _setopts=hxB 23:17:18 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:17:18 ++ tr : ' ' 23:17:18 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:18 + set +o braceexpand 23:17:18 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:18 + set +o hashall 23:17:18 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:18 + set +o interactive-comments 23:17:18 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:18 + set +o xtrace 23:17:18 ++ echo hxB 23:17:18 ++ sed 's/./& /g' 23:17:18 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:17:18 + set +h 23:17:18 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:17:18 + set +x 23:17:18 + [[ -n /tmp/tmp.xIYONv2aFw ]] 23:17:18 + rsync -av /tmp/tmp.xIYONv2aFw/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 23:17:18 sending incremental file list 23:17:18 ./ 23:17:18 log.html 23:17:18 output.xml 23:17:18 report.html 23:17:18 testplan.txt 23:17:18 23:17:18 sent 918,927 bytes received 95 bytes 1,838,044.00 bytes/sec 23:17:18 total size is 918,385 speedup is 1.00 23:17:18 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models 23:17:18 + exit 0 23:17:19 $ ssh-agent -k 23:17:19 unset SSH_AUTH_SOCK; 23:17:19 unset SSH_AGENT_PID; 23:17:19 echo Agent pid 2085 killed; 23:17:19 [ssh-agent] Stopped. 23:17:19 Robot results publisher started... 23:17:19 INFO: Checking test criticality is deprecated and will be dropped in a future release! 23:17:19 -Parsing output xml: 23:17:19 Done! 23:17:19 WARNING! Could not find file: **/log.html 23:17:19 WARNING! Could not find file: **/report.html 23:17:19 -Copying log files to build dir: 23:17:19 Done! 23:17:19 -Assigning results to build: 23:17:19 Done! 23:17:19 -Checking thresholds: 23:17:19 Done! 23:17:19 Done publishing Robot results. 23:17:19 [PostBuildScript] - [INFO] Executing post build scripts. 
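For readability, the teardown and artifact-archiving commands traced above ('++ docker-compose down ...', '+ rsync ...', '+ rm -rf ...') boil down to the sketch below. This is a reconstruction from the console trace, not the actual CSIT script: the temp directory /tmp/tmp.xIYONv2aFw is simply the name generated for this run, and the error-handling flags are assumed.

#!/bin/bash
# Minimal sketch of the teardown/archive sequence seen in the trace above (illustrative only).
set -euo pipefail

WORKSPACE=/w/workspace/policy-pap-master-project-csit-pap
ROBOT_TMP=/tmp/tmp.xIYONv2aFw        # Robot output dir created earlier in this run

echo 'Tearing down containers...'
# Stop and remove the compose project's containers, named volumes and orphans.
docker-compose down -v --remove-orphans

cd "${WORKSPACE}"

# Copy the Robot Framework artifacts (log.html, output.xml, report.html, testplan.txt)
# into the workspace so Jenkins can archive and publish them.
if [[ -n "${ROBOT_TMP}" ]]; then
  rsync -av "${ROBOT_TMP}/" "${WORKSPACE}/csit/archives/pap"
fi

# The checked-out policy models are not needed in the archives.
rm -rf "${WORKSPACE}/models"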
23:17:19 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins6569344860929802980.sh 23:17:19 ---> sysstat.sh 23:17:20 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins13524213368616749463.sh 23:17:20 ---> package-listing.sh 23:17:20 ++ facter osfamily 23:17:20 ++ tr '[:upper:]' '[:lower:]' 23:17:20 + OS_FAMILY=debian 23:17:20 + workspace=/w/workspace/policy-pap-master-project-csit-pap 23:17:20 + START_PACKAGES=/tmp/packages_start.txt 23:17:20 + END_PACKAGES=/tmp/packages_end.txt 23:17:20 + DIFF_PACKAGES=/tmp/packages_diff.txt 23:17:20 + PACKAGES=/tmp/packages_start.txt 23:17:20 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 23:17:20 + PACKAGES=/tmp/packages_end.txt 23:17:20 + case "${OS_FAMILY}" in 23:17:20 + dpkg -l 23:17:20 + grep '^ii' 23:17:20 + '[' -f /tmp/packages_start.txt ']' 23:17:20 + '[' -f /tmp/packages_end.txt ']' 23:17:20 + diff /tmp/packages_start.txt /tmp/packages_end.txt 23:17:20 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 23:17:20 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/ 23:17:20 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/ 23:17:20 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins9000781594041325618.sh 23:17:20 ---> capture-instance-metadata.sh 23:17:20 Setup pyenv: 23:17:20 system 23:17:20 3.8.13 23:17:20 3.9.13 23:17:20 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:17:20 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-OU5U from file:/tmp/.os_lf_venv 23:17:22 lf-activate-venv(): INFO: Installing: lftools 23:17:31 lf-activate-venv(): INFO: Adding /tmp/venv-OU5U/bin to PATH 23:17:31 INFO: Running in OpenStack, capturing instance metadata 23:17:32 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins17429416095744182930.sh 23:17:32 provisioning config files... 23:17:32 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config17523987800591313611tmp 23:17:32 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] 23:17:32 Run condition [Regular expression match] preventing perform for step [Provide Configuration files] 23:17:32 [EnvInject] - Injecting environment variables from a build step. 23:17:32 [EnvInject] - Injecting as environment variables the properties content 23:17:32 SERVER_ID=logs 23:17:32 23:17:32 [EnvInject] - Variables injected successfully. 23:17:32 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins5102830328937915227.sh 23:17:32 ---> create-netrc.sh 23:17:32 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins3115689211907797220.sh 23:17:32 ---> python-tools-install.sh 23:17:32 Setup pyenv: 23:17:32 system 23:17:32 3.8.13 23:17:32 3.9.13 23:17:32 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:17:32 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-OU5U from file:/tmp/.os_lf_venv 23:17:34 lf-activate-venv(): INFO: Installing: lftools 23:17:42 lf-activate-venv(): INFO: Adding /tmp/venv-OU5U/bin to PATH 23:17:42 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins10830195130829243368.sh 23:17:42 ---> sudo-logs.sh 23:17:42 Archiving 'sudo' log.. 
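The package-listing step traced above ('+ dpkg -l', "+ grep '^ii'", '+ diff ...') compares the packages installed at the start and end of the job and archives the listings. A minimal reconstruction of that step is sketched below; the output redirections and the debian branch of the case statement are inferred, since the trace only shows the commands themselves.

#!/bin/bash
# Sketch of the package-listing step, reconstructed from the trace above (illustrative only).
set -euo pipefail

workspace=/w/workspace/policy-pap-master-project-csit-pap
START_PACKAGES=/tmp/packages_start.txt
END_PACKAGES=/tmp/packages_end.txt
DIFF_PACKAGES=/tmp/packages_diff.txt

# Detect the OS family (reported as 'debian' on this Ubuntu 18.04 node).
OS_FAMILY=$(facter osfamily | tr '[:upper:]' '[:lower:]')

# Outside a workspace the snapshot goes to packages_start.txt; inside one
# (the end of the job, as here) it goes to packages_end.txt.
PACKAGES="${START_PACKAGES}"
[ -n "${workspace}" ] && PACKAGES="${END_PACKAGES}"

case "${OS_FAMILY}" in
  debian)
    # Record every installed (state 'ii') package.
    dpkg -l | grep '^ii' > "${PACKAGES}"
    ;;
esac

# If both snapshots exist, diff them; identical lists leave an empty diff.
if [ -f "${START_PACKAGES}" ] && [ -f "${END_PACKAGES}" ]; then
  diff "${START_PACKAGES}" "${END_PACKAGES}" > "${DIFF_PACKAGES}" || true
fi

# Archive all three listings next to the other build artifacts.
if [ -n "${workspace}" ]; then
  mkdir -p "${workspace}/archives/"
  cp -f "${DIFF_PACKAGES}" "${END_PACKAGES}" "${START_PACKAGES}" "${workspace}/archives/"
fi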
23:17:43 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins450656999378620533.sh
23:17:43 ---> job-cost.sh
23:17:43 Setup pyenv:
23:17:43 system
23:17:43 3.8.13
23:17:43 3.9.13
23:17:43 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:17:43 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-OU5U from file:/tmp/.os_lf_venv
23:17:44 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15
23:17:49 lf-activate-venv(): INFO: Adding /tmp/venv-OU5U/bin to PATH
23:17:49 INFO: No Stack...
23:17:49 INFO: Retrieving Pricing Info for: v3-standard-8
23:17:49 INFO: Archiving Costs
23:17:49 [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins5802965624565875268.sh
23:17:49 ---> logs-deploy.sh
23:17:49 Setup pyenv:
23:17:50 system
23:17:50 3.8.13
23:17:50 3.9.13
23:17:50 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version)
23:17:50 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-OU5U from file:/tmp/.os_lf_venv
23:17:51 lf-activate-venv(): INFO: Installing: lftools
23:17:59 lf-activate-venv(): INFO: Adding /tmp/venv-OU5U/bin to PATH
23:17:59 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1607
23:17:59 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt
23:18:00 Archives upload complete.
23:18:00 INFO: archiving logs to Nexus
23:18:01 ---> uname -a:
23:18:01 Linux prd-ubuntu1804-docker-8c-8g-12595 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
23:18:01 
23:18:01 
23:18:01 ---> lscpu:
23:18:01 Architecture: x86_64
23:18:01 CPU op-mode(s): 32-bit, 64-bit
23:18:01 Byte Order: Little Endian
23:18:01 CPU(s): 8
23:18:01 On-line CPU(s) list: 0-7
23:18:01 Thread(s) per core: 1
23:18:01 Core(s) per socket: 1
23:18:01 Socket(s): 8
23:18:01 NUMA node(s): 1
23:18:01 Vendor ID: AuthenticAMD
23:18:01 CPU family: 23
23:18:01 Model: 49
23:18:01 Model name: AMD EPYC-Rome Processor
23:18:01 Stepping: 0
23:18:01 CPU MHz: 2799.998
23:18:01 BogoMIPS: 5599.99
23:18:01 Virtualization: AMD-V
23:18:01 Hypervisor vendor: KVM
23:18:01 Virtualization type: full
23:18:01 L1d cache: 32K
23:18:01 L1i cache: 32K
23:18:01 L2 cache: 512K
23:18:01 L3 cache: 16384K
23:18:01 NUMA node0 CPU(s): 0-7
23:18:01 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities
23:18:01 
23:18:01 
23:18:01 ---> nproc:
23:18:01 8
23:18:01 
23:18:01 
23:18:01 ---> df -h:
23:18:01 Filesystem Size Used Avail Use% Mounted on
23:18:01 udev 16G 0 16G 0% /dev
23:18:01 tmpfs 3.2G 708K 3.2G 1% /run
23:18:01 /dev/vda1 155G 14G 142G 9% /
23:18:01 tmpfs 16G 0 16G 0% /dev/shm
23:18:01 tmpfs 5.0M 0 5.0M 0% /run/lock
23:18:01 tmpfs 16G 0 16G 0% /sys/fs/cgroup
23:18:01 /dev/vda15 105M 4.4M 100M 5% /boot/efi
23:18:01 tmpfs 3.2G 0 3.2G 0% /run/user/1001
23:18:01 
23:18:01 
23:18:01 ---> free -m:
23:18:01 total used free shared buff/cache available
23:18:01 Mem: 32167 856 24868 0 6441 30854
23:18:01 Swap: 1023 0 1023
23:18:01 
23:18:01 
23:18:01 ---> ip addr:
23:18:01 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
23:18:01 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
23:18:01 inet 127.0.0.1/8 scope host lo
23:18:01 valid_lft forever preferred_lft forever
23:18:01 inet6 ::1/128 scope host
23:18:01 valid_lft forever preferred_lft forever
23:18:01 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000
23:18:01 link/ether fa:16:3e:92:83:fe brd ff:ff:ff:ff:ff:ff
23:18:01 inet 10.30.106.169/23 brd 10.30.107.255 scope global dynamic ens3
23:18:01 valid_lft 85940sec preferred_lft 85940sec
23:18:01 inet6 fe80::f816:3eff:fe92:83fe/64 scope link
23:18:01 valid_lft forever preferred_lft forever
23:18:01 3: docker0: mtu 1500 qdisc noqueue state DOWN group default
23:18:01 link/ether 02:42:36:48:56:3c brd ff:ff:ff:ff:ff:ff
23:18:01 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0
23:18:01 valid_lft forever preferred_lft forever
23:18:01 
23:18:01 
23:18:01 ---> sar -b -r -n DEV:
23:18:01 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-12595) 03/11/24 _x86_64_ (8 CPU)
23:18:01 
23:18:01 23:10:24 LINUX RESTART (8 CPU)
23:18:01 
23:18:01 23:11:01 tps rtps wtps bread/s bwrtn/s
23:18:01 23:12:01 123.81 43.43 80.39 1933.14 27856.16
23:18:01 23:13:01 128.23 23.21 105.02 2794.87 32952.91
23:18:01 23:14:01 232.14 0.20 231.94 21.46 140018.26
23:18:01 23:15:01 338.28 12.48 325.80 805.50 49216.05
23:18:01 23:16:01 19.55 0.02 19.53 0.13 20986.25
23:18:01 23:17:01 27.83 0.02 27.81 0.13 22110.31
23:18:01 23:18:01 73.94 2.22 71.72 126.25 19371.10
23:18:01 Average: 134.83 11.65 123.17 811.64 44644.44
23:18:01 
23:18:01 23:11:01 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty
23:18:01 23:12:01 30074976 31661152 2864244 8.70 69604 1826948 1487488 4.38 909144 1662516 152008
23:18:01 23:13:01 28762448 31649956 4176772 12.68 100052 3057976 1618720 4.76 1001232 2796652 1041056
23:18:01 23:14:01 25536596 31658800 7402624 22.47 142144 6096932 1467228 4.32 1030984 5832968 535148
23:18:01 23:15:01 23091064 29380016 9848156 29.90 158368 6227820 9141060 26.90 3487780 5739600 1700
23:18:01 23:16:01 23103932 29393600 9835288 29.86 158556 6228112 9028748 26.56 3477868 5737228 288
23:18:01 23:17:01 23344452 29659192 9594768 29.13 159012 6255984 7464296 21.96 3236728 5751200 176
23:18:01 23:18:01 25493440 31622824 7445780 22.60 160928 6083936 1519984 4.47 1305556 5586444 1660
23:18:01 Average: 25629558 30717934 7309662 22.19 135523 5111101 4532503 13.34 2064185 4729515 247434
23:18:01 
23:18:01 23:11:01 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil
23:18:01 23:12:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:01 23:12:01 lo 1.67 1.67 0.18 0.18 0.00 0.00 0.00 0.00
23:18:01 23:12:01 ens3 237.39 162.19 1151.80 43.69 0.00 0.00 0.00 0.00
23:18:01 23:13:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:01 23:13:01 lo 7.13 7.13 0.66 0.66 0.00 0.00 0.00 0.00
23:18:01 23:13:01 ens3 226.56 150.59 6510.39 17.45 0.00 0.00 0.00 0.00
23:18:01 23:13:01 br-6366207df32f 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:01 23:14:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:01 23:14:01 lo 6.47 6.47 0.65 0.65 0.00 0.00 0.00 0.00
23:18:01 23:14:01 ens3 1082.50 543.24 27021.53 40.20 0.00 0.00 0.00 0.00
23:18:01 23:14:01 br-6366207df32f 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:01 23:15:01 veth493851b 0.35 0.63 0.03 0.63 0.00 0.00 0.00 0.00
23:18:01 23:15:01 veth49bd70b 0.53 0.85 0.06 0.31 0.00 0.00 0.00 0.00
23:18:01 23:15:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:01 23:15:01 veth2024c5a 1.95 2.27 0.38 0.21 0.00 0.00 0.00 0.00
23:18:01 23:16:01 veth493851b 0.53 0.53 0.05 1.51 0.00 0.00 0.00 0.00
23:18:01 23:16:01 veth49bd70b 0.25 0.20 0.02 0.01 0.00 0.00 0.00 0.00
23:18:01 23:16:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:01 23:16:01 veth2024c5a 3.75 5.22 0.77 0.47 0.00 0.00 0.00 0.00
23:18:01 23:17:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:01 23:17:01 lo 34.59 34.59 6.17 6.17 0.00 0.00 0.00 0.00
23:18:01 23:17:01 ens3 1765.77 1000.05 35194.67 145.56 0.00 0.00 0.00 0.00
23:18:01 23:17:01 vethf1e53ea 39.49 30.13 3.84 4.30 0.00 0.00 0.00 0.00
23:18:01 23:18:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:01 23:18:01 lo 0.73 0.73 0.08 0.08 0.00 0.00 0.00 0.00
23:18:01 23:18:01 ens3 55.07 42.54 73.17 20.40 0.00 0.00 0.00 0.00
23:18:01 Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
23:18:01 Average: lo 4.49 4.49 0.85 0.85 0.00 0.00 0.00 0.00
23:18:01 Average: ens3 232.66 131.63 4967.33 20.22 0.00 0.00 0.00 0.00
23:18:01 
23:18:01 
23:18:01 ---> sar -P ALL:
23:18:01 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-12595) 03/11/24 _x86_64_ (8 CPU)
23:18:01 
23:18:01 23:10:24 LINUX RESTART (8 CPU)
23:18:01 
23:18:01 23:11:01 CPU %user %nice %system %iowait %steal %idle
23:18:01 23:12:01 all 11.19 0.00 0.85 2.56 0.04 85.36
23:18:01 23:12:01 0 5.72 0.00 0.80 0.53 0.05 92.89
23:18:01 23:12:01 1 2.57 0.00 0.67 12.27 0.05 84.44
23:18:01 23:12:01 2 4.13 0.00 0.38 0.25 0.03 95.21
23:18:01 23:12:01 3 3.28 0.00 0.32 0.38 0.02 96.00
23:18:01 23:12:01 4 1.87 0.00 0.48 3.30 0.02 94.33
23:18:01 23:12:01 5 37.55 0.00 2.14 2.44 0.10 57.77
23:18:01 23:12:01 6 24.12 0.00 1.35 0.87 0.07 73.59
23:18:01 23:12:01 7 10.35 0.00 0.65 0.48 0.02 88.50
23:18:01 23:13:01 all 11.17 0.00 1.84 2.20 0.04 84.74
23:18:01 23:13:01 0 21.77 0.00 2.53 4.34 0.08 71.27
23:18:01 23:13:01 1 16.56 0.00 1.93 10.70 0.05 70.76
23:18:01 23:13:01 2 5.89 0.00 1.48 0.20 0.02 92.42
23:18:01 23:13:01 3 14.06 0.00 2.28 0.97 0.03 82.65
23:18:01 23:13:01 4 6.35 0.00 1.79 0.18 0.03 91.64
23:18:01 23:13:01 5 13.21 0.00 1.51 0.39 0.02 84.88
23:18:01 23:13:01 6 7.42 0.00 1.59 0.72 0.03 90.24
23:18:01 23:13:01 7 4.12 0.00 1.61 0.12 0.03 94.12
23:18:01 23:14:01 all 12.27 0.00 5.90 7.53 0.07 74.24
23:18:01 23:14:01 0 14.67 0.00 4.74 2.36 0.08 78.15
23:18:01 23:14:01 1 11.57 0.00 7.38 30.93 0.07 50.06
23:18:01 23:14:01 2 13.02 0.00 5.21 11.63 0.08 70.06
23:18:01 23:14:01 3 10.30 0.00 7.17 3.03 0.07 79.44
23:18:01 23:14:01 4 11.31 0.00 5.53 0.31 0.07 82.79
23:18:01 23:14:01 5 13.89 0.00 5.41 0.09 0.07 80.55
23:18:01 23:14:01 6 12.29 0.00 6.12 0.17 0.07 81.35
23:18:01 23:14:01 7 11.11 0.00 5.60 11.79 0.09 71.41
23:18:01 23:15:01 all 29.42 0.00 3.90 2.97 0.07 63.64
23:18:01 23:15:01 0 26.57 0.00 3.41 0.97 0.08 68.97
23:18:01 23:15:01 1 23.51 0.00 3.51 13.30 0.08 59.60
23:18:01 23:15:01 2 24.08 0.00 3.77 1.76 0.07 70.32
23:18:01 23:15:01 3 34.84 0.00 4.52 3.80 0.08 56.76
23:18:01 23:15:01 4 31.72 0.00 4.02 0.94 0.07 63.25
23:18:01 23:15:01 5 35.68 0.00 3.89 0.62 0.07 59.75
23:18:01 23:15:01 6 32.51 0.00 4.46 1.21 0.07 61.76
23:18:01 23:15:01 7 26.54 0.00 3.65 1.07 0.07 68.67
23:18:01 23:16:01 all 4.90 0.00 0.47 1.14 0.05 93.44
23:18:01 23:16:01 0 4.06 0.00 0.38 0.00 0.03 95.52
23:18:01 23:16:01 1 5.64 0.00 0.38 8.58 0.07 85.33
23:18:01 23:16:01 2 5.00 0.00 0.50 0.02 0.02 94.47
23:18:01 23:16:01 3 5.46 0.00 0.65 0.05 0.03 93.80
23:18:01 23:16:01 4 4.55 0.00 0.40 0.08 0.03 94.94
23:18:01 23:16:01 5 2.92 0.00 0.47 0.08 0.03 96.49
23:18:01 23:16:01 6 7.26 0.00 0.72 0.02 0.03 91.97
23:18:01 23:16:01 7 4.30 0.00 0.32 0.22 0.10 95.07
23:18:01 23:17:01 all 1.52 0.00 0.37 1.15 0.03 96.92
23:18:01 23:17:01 0 1.49 0.00 0.40 0.02 0.03 98.06
23:18:01 23:17:01 1 2.02 0.00 0.30 8.26 0.03 89.39
23:18:01 23:17:01 2 1.02 0.00 0.45 0.00 0.03 98.50
23:18:01 23:17:01 3 1.32 0.00 0.40 0.40 0.02 97.87
23:18:01 23:17:01 4 2.10 0.00 0.37 0.03 0.03 97.46
23:18:01 23:17:01 5 1.79 0.00 0.35 0.02 0.03 97.81
23:18:01 23:17:01 6 1.37 0.00 0.25 0.12 0.03 98.23
23:18:01 23:17:01 7 1.04 0.00 0.42 0.32 0.03 98.20
23:18:01 23:18:01 all 8.18 0.00 0.66 1.08 0.03 90.05
23:18:01 23:18:01 0 9.72 0.00 0.57 0.25 0.03 89.42
23:18:01 23:18:01 1 2.85 0.00 0.56 7.05 0.03 89.50
23:18:01 23:18:01 2 2.99 0.00 0.60 0.27 0.02 96.13
23:18:01 23:18:01 3 1.12 0.00 0.60 0.05 0.03 98.20
23:18:01 23:18:01 4 0.93 0.00 0.48 0.25 0.02 98.31
23:18:01 23:18:01 5 29.15 0.00 1.15 0.35 0.05 69.29
23:18:01 23:18:01 6 5.00 0.00 0.57 0.17 0.02 94.25
23:18:01 23:18:01 7 13.77 0.00 0.74 0.23 0.02 85.25
23:18:01 Average: all 11.22 0.00 1.99 2.65 0.05 84.10
23:18:01 Average: 0 11.99 0.00 1.83 1.21 0.06 84.92
23:18:01 Average: 1 9.22 0.00 2.08 12.94 0.05 75.71
23:18:01 Average: 2 8.00 0.00 1.76 2.00 0.04 88.21
23:18:01 Average: 3 10.02 0.00 2.26 1.23 0.04 86.44
23:18:01 Average: 4 8.37 0.00 1.86 0.73 0.04 89.00
23:18:01 Average: 5 19.18 0.00 2.12 0.57 0.05 78.08
23:18:01 Average: 6 12.84 0.00 2.14 0.47 0.05 84.51
23:18:01 Average: 7 10.17 0.00 1.84 2.00 0.05 85.93
23:18:01 
23:18:01 
23:18:01
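The sar tables above are rendered from the sysstat data collected on the build host during the run (the sysstat.sh step earlier in this log). Assuming the day's binary data file is available, for example under the default Ubuntu path /var/log/sysstat/ (an assumption; this log does not show where the file was archived), the same views can be regenerated offline for just the build window:

    # Hedged example: re-query archived sysstat data for the 23:11-23:18 window shown above.
    # The file name sa11 is assumed from the 03/11/24 run date; adjust the path to wherever
    # the binary data file actually ended up.
    sar -b -r -n DEV -f /var/log/sysstat/sa11 -s 23:11:01 -e 23:18:01   # I/O, memory and NIC tables
    sar -P ALL -f /var/log/sysstat/sa11 -s 23:11:01 -e 23:18:01         # per-CPU utilisation table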