23:10:56 Started by timer 23:10:56 Running as SYSTEM 23:10:56 [EnvInject] - Loading node environment variables. 23:10:56 Building remotely on prd-ubuntu1804-docker-8c-8g-10134 (ubuntu1804-docker-8c-8g) in workspace /w/workspace/policy-pap-master-project-csit-pap 23:10:56 [ssh-agent] Looking for ssh-agent implementation... 23:10:56 [ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine) 23:10:56 $ ssh-agent 23:10:56 SSH_AUTH_SOCK=/tmp/ssh-xovt9zhShf3H/agent.2082 23:10:56 SSH_AGENT_PID=2084 23:10:56 [ssh-agent] Started. 23:10:56 Running ssh-add (command line suppressed) 23:10:56 Identity added: /w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_12036637104213518731.key (/w/workspace/policy-pap-master-project-csit-pap@tmp/private_key_12036637104213518731.key) 23:10:56 [ssh-agent] Using credentials onap-jobbuiler (Gerrit user) 23:10:56 The recommended git tool is: NONE 23:10:58 using credential onap-jenkins-ssh 23:10:58 Wiping out workspace first. 23:10:58 Cloning the remote Git repository 23:10:58 Cloning repository git://cloud.onap.org/mirror/policy/docker.git 23:10:58 > git init /w/workspace/policy-pap-master-project-csit-pap # timeout=10 23:10:58 Fetching upstream changes from git://cloud.onap.org/mirror/policy/docker.git 23:10:58 > git --version # timeout=10 23:10:58 > git --version # 'git version 2.17.1' 23:10:58 using GIT_SSH to set credentials Gerrit user 23:10:58 Verifying host key using manually-configured host key entries 23:10:58 > git fetch --tags --progress -- git://cloud.onap.org/mirror/policy/docker.git +refs/heads/*:refs/remotes/origin/* # timeout=30 23:10:59 > git config remote.origin.url git://cloud.onap.org/mirror/policy/docker.git # timeout=10 23:10:59 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10 23:10:59 Avoid second fetch 23:10:59 > git rev-parse refs/remotes/origin/master^{commit} # timeout=10 23:10:59 Checking out Revision 5582cd406c8414919c4d5d7f5b116f4f1e5a971d (refs/remotes/origin/master) 23:10:59 > git config core.sparsecheckout # timeout=10 23:10:59 > git checkout -f 5582cd406c8414919c4d5d7f5b116f4f1e5a971d # timeout=30 23:10:59 Commit message: "Merge "Add ACM regression test suite"" 23:10:59 > git rev-list --no-walk 5582cd406c8414919c4d5d7f5b116f4f1e5a971d # timeout=10 23:10:59 provisioning config files... 23:10:59 copy managed file [npmrc] to file:/home/jenkins/.npmrc 23:10:59 copy managed file [pipconf] to file:/home/jenkins/.config/pip/pip.conf 23:10:59 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins15161470854604570350.sh 23:10:59 ---> python-tools-install.sh 23:10:59 Setup pyenv: 23:10:59 * system (set by /opt/pyenv/version) 23:11:00 * 3.8.13 (set by /opt/pyenv/version) 23:11:00 * 3.9.13 (set by /opt/pyenv/version) 23:11:00 * 3.10.6 (set by /opt/pyenv/version) 23:11:04 lf-activate-venv(): INFO: Creating python3 venv at /tmp/venv-6x4h 23:11:04 lf-activate-venv(): INFO: Save venv in file: /tmp/.os_lf_venv 23:11:07 lf-activate-venv(): INFO: Installing: lftools 23:11:40 lf-activate-venv(): INFO: Adding /tmp/venv-6x4h/bin to PATH 23:11:40 Generating Requirements File 23:12:08 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. 23:12:08 lftools 0.37.9 requires openstacksdk>=2.1.0, but you have openstacksdk 0.62.0 which is incompatible. 
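The resolver message above is pip flagging that lftools 0.37.9 declares openstacksdk>=2.1.0 while this venv ends up with openstacksdk 0.62.0. A minimal sketch of reproducing the same complaint outside the lf-activate-venv helper (version pins taken from the freeze listing that follows; the venv path is simply the one this run happened to use):

    python3 -m venv /tmp/venv-6x4h && . /tmp/venv-6x4h/bin/activate
    pip install lftools==0.37.9            # pulls in an openstacksdk satisfying >=2.1.0
    pip install openstacksdk==0.62.0       # downgrading below that bound triggers the same resolver warning
    pip check                              # reports the now-broken lftools requirement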
23:12:08 Python 3.10.6 23:12:08 pip 24.0 from /tmp/venv-6x4h/lib/python3.10/site-packages/pip (python 3.10) 23:12:09 appdirs==1.4.4 23:12:09 argcomplete==3.2.2 23:12:09 aspy.yaml==1.3.0 23:12:09 attrs==23.2.0 23:12:09 autopage==0.5.2 23:12:09 beautifulsoup4==4.12.3 23:12:09 boto3==1.34.54 23:12:09 botocore==1.34.54 23:12:09 bs4==0.0.2 23:12:09 cachetools==5.3.3 23:12:09 certifi==2024.2.2 23:12:09 cffi==1.16.0 23:12:09 cfgv==3.4.0 23:12:09 chardet==5.2.0 23:12:09 charset-normalizer==3.3.2 23:12:09 click==8.1.7 23:12:09 cliff==4.6.0 23:12:09 cmd2==2.4.3 23:12:09 cryptography==3.3.2 23:12:09 debtcollector==3.0.0 23:12:09 decorator==5.1.1 23:12:09 defusedxml==0.7.1 23:12:09 Deprecated==1.2.14 23:12:09 distlib==0.3.8 23:12:09 dnspython==2.6.1 23:12:09 docker==4.2.2 23:12:09 dogpile.cache==1.3.2 23:12:09 email_validator==2.1.1 23:12:09 filelock==3.13.1 23:12:09 future==1.0.0 23:12:09 gitdb==4.0.11 23:12:09 GitPython==3.1.42 23:12:09 google-auth==2.28.1 23:12:09 httplib2==0.22.0 23:12:09 identify==2.5.35 23:12:09 idna==3.6 23:12:09 importlib-resources==1.5.0 23:12:09 iso8601==2.1.0 23:12:09 Jinja2==3.1.3 23:12:09 jmespath==1.0.1 23:12:09 jsonpatch==1.33 23:12:09 jsonpointer==2.4 23:12:09 jsonschema==4.21.1 23:12:09 jsonschema-specifications==2023.12.1 23:12:09 keystoneauth1==5.6.0 23:12:09 kubernetes==29.0.0 23:12:09 lftools==0.37.9 23:12:09 lxml==5.1.0 23:12:09 MarkupSafe==2.1.5 23:12:09 msgpack==1.0.8 23:12:09 multi_key_dict==2.0.3 23:12:09 munch==4.0.0 23:12:09 netaddr==1.2.1 23:12:09 netifaces==0.11.0 23:12:09 niet==1.4.2 23:12:09 nodeenv==1.8.0 23:12:09 oauth2client==4.1.3 23:12:09 oauthlib==3.2.2 23:12:09 openstacksdk==0.62.0 23:12:09 os-client-config==2.1.0 23:12:09 os-service-types==1.7.0 23:12:09 osc-lib==3.0.1 23:12:09 oslo.config==9.4.0 23:12:09 oslo.context==5.5.0 23:12:09 oslo.i18n==6.3.0 23:12:09 oslo.log==5.5.0 23:12:09 oslo.serialization==5.4.0 23:12:09 oslo.utils==7.1.0 23:12:09 packaging==23.2 23:12:09 pbr==6.0.0 23:12:09 platformdirs==4.2.0 23:12:09 prettytable==3.10.0 23:12:09 pyasn1==0.5.1 23:12:09 pyasn1-modules==0.3.0 23:12:09 pycparser==2.21 23:12:09 pygerrit2==2.0.15 23:12:09 PyGithub==2.2.0 23:12:09 pyinotify==0.9.6 23:12:09 PyJWT==2.8.0 23:12:09 PyNaCl==1.5.0 23:12:09 pyparsing==2.4.7 23:12:09 pyperclip==1.8.2 23:12:09 pyrsistent==0.20.0 23:12:09 python-cinderclient==9.5.0 23:12:09 python-dateutil==2.9.0.post0 23:12:09 python-heatclient==3.5.0 23:12:09 python-jenkins==1.8.2 23:12:09 python-keystoneclient==5.4.0 23:12:09 python-magnumclient==4.4.0 23:12:09 python-novaclient==18.5.0 23:12:09 python-openstackclient==6.0.1 23:12:09 python-swiftclient==4.5.0 23:12:09 PyYAML==6.0.1 23:12:09 referencing==0.33.0 23:12:09 requests==2.31.0 23:12:09 requests-oauthlib==1.3.1 23:12:09 requestsexceptions==1.4.0 23:12:09 rfc3986==2.0.0 23:12:09 rpds-py==0.18.0 23:12:09 rsa==4.9 23:12:09 ruamel.yaml==0.18.6 23:12:09 ruamel.yaml.clib==0.2.8 23:12:09 s3transfer==0.10.0 23:12:09 simplejson==3.19.2 23:12:09 six==1.16.0 23:12:09 smmap==5.0.1 23:12:09 soupsieve==2.5 23:12:09 stevedore==5.2.0 23:12:09 tabulate==0.9.0 23:12:09 toml==0.10.2 23:12:09 tomlkit==0.12.4 23:12:09 tqdm==4.66.2 23:12:09 typing_extensions==4.10.0 23:12:09 tzdata==2024.1 23:12:09 urllib3==1.26.18 23:12:09 virtualenv==20.25.1 23:12:09 wcwidth==0.2.13 23:12:09 websocket-client==1.7.0 23:12:09 wrapt==1.16.0 23:12:09 xdg==6.0.0 23:12:09 xmltodict==0.13.0 23:12:09 yq==3.2.3 23:12:09 [EnvInject] - Injecting environment variables from a build step. 
23:12:09 [EnvInject] - Injecting as environment variables the properties content 23:12:09 SET_JDK_VERSION=openjdk17 23:12:09 GIT_URL="git://cloud.onap.org/mirror" 23:12:09 23:12:09 [EnvInject] - Variables injected successfully. 23:12:09 [policy-pap-master-project-csit-pap] $ /bin/sh /tmp/jenkins13148097003192653678.sh 23:12:09 ---> update-java-alternatives.sh 23:12:09 ---> Updating Java version 23:12:09 ---> Ubuntu/Debian system detected 23:12:09 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/java to provide /usr/bin/java (java) in manual mode 23:12:09 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64/bin/javac to provide /usr/bin/javac (javac) in manual mode 23:12:09 update-alternatives: using /usr/lib/jvm/java-17-openjdk-amd64 to provide /usr/lib/jvm/java-openjdk (java_sdk_openjdk) in manual mode 23:12:09 openjdk version "17.0.4" 2022-07-19 23:12:09 OpenJDK Runtime Environment (build 17.0.4+8-Ubuntu-118.04) 23:12:09 OpenJDK 64-Bit Server VM (build 17.0.4+8-Ubuntu-118.04, mixed mode, sharing) 23:12:09 JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64 23:12:09 [EnvInject] - Injecting environment variables from a build step. 23:12:09 [EnvInject] - Injecting as environment variables the properties file path '/tmp/java.env' 23:12:09 [EnvInject] - Variables injected successfully. 23:12:09 [policy-pap-master-project-csit-pap] $ /bin/sh -xe /tmp/jenkins164350957592208568.sh 23:12:09 + /w/workspace/policy-pap-master-project-csit-pap/csit/run-project-csit.sh pap 23:12:09 + set +u 23:12:09 + save_set 23:12:09 + RUN_CSIT_SAVE_SET=ehxB 23:12:09 + RUN_CSIT_SHELLOPTS=braceexpand:errexit:hashall:interactive-comments:pipefail:xtrace 23:12:09 + '[' 1 -eq 0 ']' 23:12:09 + '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:12:09 + export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:09 + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:09 + export SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts 23:12:09 + SCRIPTS=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts 23:12:09 + export ROBOT_VARIABLES= 23:12:09 + ROBOT_VARIABLES= 23:12:09 + export PROJECT=pap 23:12:09 + PROJECT=pap 23:12:09 + cd /w/workspace/policy-pap-master-project-csit-pap 23:12:09 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 23:12:09 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 23:12:09 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh 23:12:09 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh ']' 23:12:09 + relax_set 23:12:09 + set +e 23:12:09 + set +o pipefail 23:12:09 + . 
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/prepare-robot-env.sh 23:12:09 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:12:09 +++ mktemp -d 23:12:09 ++ ROBOT_VENV=/tmp/tmp.rVU7Gi2IXs 23:12:09 ++ echo ROBOT_VENV=/tmp/tmp.rVU7Gi2IXs 23:12:09 +++ python3 --version 23:12:09 ++ echo 'Python version is: Python 3.6.9' 23:12:09 Python version is: Python 3.6.9 23:12:09 ++ python3 -m venv --clear /tmp/tmp.rVU7Gi2IXs 23:12:11 ++ source /tmp/tmp.rVU7Gi2IXs/bin/activate 23:12:11 +++ deactivate nondestructive 23:12:11 +++ '[' -n '' ']' 23:12:11 +++ '[' -n '' ']' 23:12:11 +++ '[' -n /bin/bash -o -n '' ']' 23:12:11 +++ hash -r 23:12:11 +++ '[' -n '' ']' 23:12:11 +++ unset VIRTUAL_ENV 23:12:11 +++ '[' '!' nondestructive = nondestructive ']' 23:12:11 +++ VIRTUAL_ENV=/tmp/tmp.rVU7Gi2IXs 23:12:11 +++ export VIRTUAL_ENV 23:12:11 +++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:11 +++ PATH=/tmp/tmp.rVU7Gi2IXs/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:11 +++ export PATH 23:12:11 +++ '[' -n '' ']' 23:12:11 +++ '[' -z '' ']' 23:12:11 +++ _OLD_VIRTUAL_PS1= 23:12:11 +++ '[' 'x(tmp.rVU7Gi2IXs) ' '!=' x ']' 23:12:11 +++ PS1='(tmp.rVU7Gi2IXs) ' 23:12:11 +++ export PS1 23:12:11 +++ '[' -n /bin/bash -o -n '' ']' 23:12:11 +++ hash -r 23:12:11 ++ set -exu 23:12:11 ++ python3 -m pip install -qq --upgrade 'pip<=23.0' 'setuptools<=66.1.1' 23:12:14 ++ echo 'Installing Python Requirements' 23:12:14 Installing Python Requirements 23:12:14 ++ python3 -m pip install -qq -r /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/pylibs.txt 23:12:32 ++ python3 -m pip -qq freeze 23:12:32 bcrypt==4.0.1 23:12:32 beautifulsoup4==4.12.3 23:12:32 bitarray==2.9.2 23:12:32 certifi==2024.2.2 23:12:32 cffi==1.15.1 23:12:32 charset-normalizer==2.0.12 23:12:32 cryptography==40.0.2 23:12:32 decorator==5.1.1 23:12:32 elasticsearch==7.17.9 23:12:32 elasticsearch-dsl==7.4.1 23:12:32 enum34==1.1.10 23:12:32 idna==3.6 23:12:32 importlib-resources==5.4.0 23:12:32 ipaddr==2.2.0 23:12:32 isodate==0.6.1 23:12:32 jmespath==0.10.0 23:12:32 jsonpatch==1.32 23:12:32 jsonpath-rw==1.4.0 23:12:32 jsonpointer==2.3 23:12:32 lxml==5.1.0 23:12:32 netaddr==0.8.0 23:12:32 netifaces==0.11.0 23:12:32 odltools==0.1.28 23:12:32 paramiko==3.4.0 23:12:32 pkg_resources==0.0.0 23:12:32 ply==3.11 23:12:32 pyang==2.6.0 23:12:32 pyangbind==0.8.1 23:12:32 pycparser==2.21 23:12:32 pyhocon==0.3.60 23:12:32 PyNaCl==1.5.0 23:12:32 pyparsing==3.1.1 23:12:32 python-dateutil==2.9.0.post0 23:12:32 regex==2023.8.8 23:12:32 requests==2.27.1 23:12:32 robotframework==6.1.1 23:12:32 robotframework-httplibrary==0.4.2 23:12:32 robotframework-pythonlibcore==3.0.0 23:12:32 robotframework-requests==0.9.4 23:12:32 robotframework-selenium2library==3.0.0 23:12:32 robotframework-seleniumlibrary==5.1.3 23:12:32 robotframework-sshlibrary==3.8.0 23:12:32 scapy==2.5.0 23:12:32 scp==0.14.5 23:12:32 selenium==3.141.0 23:12:32 six==1.16.0 23:12:32 soupsieve==2.3.2.post1 23:12:32 urllib3==1.26.18 23:12:32 waitress==2.0.0 23:12:32 WebOb==1.8.7 23:12:32 WebTest==3.0.0 23:12:32 zipp==3.6.0 23:12:32 ++ mkdir -p /tmp/tmp.rVU7Gi2IXs/src/onap 23:12:32 ++ rm -rf 
/tmp/tmp.rVU7Gi2IXs/src/onap/testsuite 23:12:32 ++ python3 -m pip install -qq --upgrade --extra-index-url=https://nexus3.onap.org/repository/PyPi.staging/simple 'robotframework-onap==0.6.0.*' --pre 23:12:39 ++ echo 'Installing python confluent-kafka library' 23:12:39 Installing python confluent-kafka library 23:12:39 ++ python3 -m pip install -qq confluent-kafka 23:12:39 ++ echo 'Uninstall docker-py and reinstall docker.' 23:12:39 Uninstall docker-py and reinstall docker. 23:12:39 ++ python3 -m pip uninstall -y -qq docker 23:12:40 ++ python3 -m pip install -U -qq docker 23:12:41 ++ python3 -m pip -qq freeze 23:12:41 bcrypt==4.0.1 23:12:41 beautifulsoup4==4.12.3 23:12:41 bitarray==2.9.2 23:12:41 certifi==2024.2.2 23:12:41 cffi==1.15.1 23:12:41 charset-normalizer==2.0.12 23:12:41 confluent-kafka==2.3.0 23:12:41 cryptography==40.0.2 23:12:41 decorator==5.1.1 23:12:41 deepdiff==5.7.0 23:12:41 dnspython==2.2.1 23:12:41 docker==5.0.3 23:12:41 elasticsearch==7.17.9 23:12:41 elasticsearch-dsl==7.4.1 23:12:41 enum34==1.1.10 23:12:41 future==1.0.0 23:12:41 idna==3.6 23:12:41 importlib-resources==5.4.0 23:12:41 ipaddr==2.2.0 23:12:41 isodate==0.6.1 23:12:41 Jinja2==3.0.3 23:12:41 jmespath==0.10.0 23:12:41 jsonpatch==1.32 23:12:41 jsonpath-rw==1.4.0 23:12:41 jsonpointer==2.3 23:12:41 kafka-python==2.0.2 23:12:41 lxml==5.1.0 23:12:41 MarkupSafe==2.0.1 23:12:41 more-itertools==5.0.0 23:12:41 netaddr==0.8.0 23:12:41 netifaces==0.11.0 23:12:41 odltools==0.1.28 23:12:41 ordered-set==4.0.2 23:12:41 paramiko==3.4.0 23:12:41 pbr==6.0.0 23:12:41 pkg_resources==0.0.0 23:12:41 ply==3.11 23:12:41 protobuf==3.19.6 23:12:41 pyang==2.6.0 23:12:41 pyangbind==0.8.1 23:12:41 pycparser==2.21 23:12:41 pyhocon==0.3.60 23:12:41 PyNaCl==1.5.0 23:12:41 pyparsing==3.1.1 23:12:41 python-dateutil==2.9.0.post0 23:12:41 PyYAML==6.0.1 23:12:41 regex==2023.8.8 23:12:41 requests==2.27.1 23:12:41 robotframework==6.1.1 23:12:41 robotframework-httplibrary==0.4.2 23:12:41 robotframework-onap==0.6.0.dev105 23:12:41 robotframework-pythonlibcore==3.0.0 23:12:41 robotframework-requests==0.9.4 23:12:41 robotframework-selenium2library==3.0.0 23:12:41 robotframework-seleniumlibrary==5.1.3 23:12:41 robotframework-sshlibrary==3.8.0 23:12:41 robotlibcore-temp==1.0.2 23:12:41 scapy==2.5.0 23:12:41 scp==0.14.5 23:12:41 selenium==3.141.0 23:12:41 six==1.16.0 23:12:41 soupsieve==2.3.2.post1 23:12:41 urllib3==1.26.18 23:12:41 waitress==2.0.0 23:12:41 WebOb==1.8.7 23:12:41 websocket-client==1.3.1 23:12:41 WebTest==3.0.0 23:12:41 zipp==3.6.0 23:12:41 ++ uname 23:12:41 ++ grep -q Linux 23:12:41 ++ sudo apt-get -y -qq install libxml2-utils 23:12:41 + load_set 23:12:41 + _setopts=ehuxB 23:12:41 ++ echo braceexpand:hashall:interactive-comments:nounset:xtrace 23:12:41 ++ tr : ' ' 23:12:41 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:41 + set +o braceexpand 23:12:41 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:41 + set +o hashall 23:12:41 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:41 + set +o interactive-comments 23:12:41 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:41 + set +o nounset 23:12:41 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:41 + set +o xtrace 23:12:41 ++ echo ehuxB 23:12:41 ++ sed 's/./& /g' 23:12:41 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:41 + set +e 23:12:41 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:41 + set +h 23:12:41 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:41 + set +u 23:12:41 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:41 + set +x 23:12:41 + 
source_safely /tmp/tmp.rVU7Gi2IXs/bin/activate 23:12:41 + '[' -z /tmp/tmp.rVU7Gi2IXs/bin/activate ']' 23:12:41 + relax_set 23:12:41 + set +e 23:12:41 + set +o pipefail 23:12:41 + . /tmp/tmp.rVU7Gi2IXs/bin/activate 23:12:41 ++ deactivate nondestructive 23:12:41 ++ '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin ']' 23:12:41 ++ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:41 ++ export PATH 23:12:41 ++ unset _OLD_VIRTUAL_PATH 23:12:41 ++ '[' -n '' ']' 23:12:41 ++ '[' -n /bin/bash -o -n '' ']' 23:12:41 ++ hash -r 23:12:41 ++ '[' -n '' ']' 23:12:41 ++ unset VIRTUAL_ENV 23:12:41 ++ '[' '!' nondestructive = nondestructive ']' 23:12:41 ++ VIRTUAL_ENV=/tmp/tmp.rVU7Gi2IXs 23:12:41 ++ export VIRTUAL_ENV 23:12:41 ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:41 ++ PATH=/tmp/tmp.rVU7Gi2IXs/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/w/workspace/policy-pap-master-project-csit-pap/csit:/w/workspace/policy-pap-master-project-csit-pap/scripts:/bin 23:12:41 ++ export PATH 23:12:41 ++ '[' -n '' ']' 23:12:41 ++ '[' -z '' ']' 23:12:41 ++ _OLD_VIRTUAL_PS1='(tmp.rVU7Gi2IXs) ' 23:12:41 ++ '[' 'x(tmp.rVU7Gi2IXs) ' '!=' x ']' 23:12:41 ++ PS1='(tmp.rVU7Gi2IXs) (tmp.rVU7Gi2IXs) ' 23:12:41 ++ export PS1 23:12:41 ++ '[' -n /bin/bash -o -n '' ']' 23:12:41 ++ hash -r 23:12:41 + load_set 23:12:41 + _setopts=hxB 23:12:41 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:12:41 ++ tr : ' ' 23:12:41 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:41 + set +o braceexpand 23:12:41 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:41 + set +o hashall 23:12:41 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:41 + set +o interactive-comments 23:12:41 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:12:41 + set +o xtrace 23:12:41 ++ echo hxB 23:12:41 ++ sed 's/./& /g' 23:12:41 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:41 + set +h 23:12:41 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:12:41 + set +x 23:12:41 + export TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests 23:12:41 + TEST_PLAN_DIR=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests 23:12:41 + export TEST_OPTIONS= 23:12:41 + TEST_OPTIONS= 23:12:41 ++ mktemp -d 23:12:41 + WORKDIR=/tmp/tmp.h1E7S28y7V 23:12:41 + cd /tmp/tmp.h1E7S28y7V 23:12:41 + docker login -u docker -p docker nexus3.onap.org:10001 23:12:42 WARNING! Using --password via the CLI is insecure. Use --password-stdin. 23:12:42 WARNING! Your password will be stored unencrypted in /home/jenkins/.docker/config.json. 23:12:42 Configure a credential helper to remove this warning. 
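The docker login warning just above is the CLI's standard nudge towards --password-stdin; an equivalent non-interactive form against the same Nexus registry (using the same throwaway docker/docker credentials shown in the trace) would be:

    echo docker | docker login -u docker --password-stdin nexus3.onap.org:10001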
See 23:12:42 https://docs.docker.com/engine/reference/commandline/login/#credentials-store 23:12:42 23:12:42 Login Succeeded 23:12:42 + SETUP=/w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:12:42 + '[' -f /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' 23:12:42 + echo 'Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh' 23:12:42 Running setup script /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:12:42 + source_safely /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:12:42 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh ']' 23:12:42 + relax_set 23:12:42 + set +e 23:12:42 + set +o pipefail 23:12:42 + . /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/setup-pap.sh 23:12:42 ++ source /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/node-templates.sh 23:12:42 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:12:42 ++++ awk -F= '$1 == "defaultbranch" { print $2 }' /w/workspace/policy-pap-master-project-csit-pap/.gitreview 23:12:42 +++ GERRIT_BRANCH=master 23:12:42 +++ echo GERRIT_BRANCH=master 23:12:42 GERRIT_BRANCH=master 23:12:42 +++ rm -rf /w/workspace/policy-pap-master-project-csit-pap/models 23:12:42 +++ mkdir /w/workspace/policy-pap-master-project-csit-pap/models 23:12:42 +++ git clone -b master --single-branch https://github.com/onap/policy-models.git /w/workspace/policy-pap-master-project-csit-pap/models 23:12:42 Cloning into '/w/workspace/policy-pap-master-project-csit-pap/models'... 23:12:43 +++ export DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies 23:12:43 +++ DATA=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies 23:12:43 +++ export NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 23:12:43 +++ NODETEMPLATES=/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 23:12:43 +++ sed -e 's!Measurement_vGMUX!ADifferentValue!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json 23:12:43 +++ sed -e 's!"version": "1.0.0"!"version": "2.0.0"!' -e 's!"policy-version": 1!"policy-version": 2!' /w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies/vCPE.policy.monitoring.input.tosca.json 23:12:43 ++ source /w/workspace/policy-pap-master-project-csit-pap/compose/start-compose.sh apex-pdp --grafana 23:12:43 +++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:12:43 +++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose 23:12:43 +++ grafana=false 23:12:43 +++ gui=false 23:12:43 +++ [[ 2 -gt 0 ]] 23:12:43 +++ key=apex-pdp 23:12:43 +++ case $key in 23:12:43 +++ echo apex-pdp 23:12:43 apex-pdp 23:12:43 +++ component=apex-pdp 23:12:43 +++ shift 23:12:43 +++ [[ 1 -gt 0 ]] 23:12:43 +++ key=--grafana 23:12:43 +++ case $key in 23:12:43 +++ grafana=true 23:12:43 +++ shift 23:12:43 +++ [[ 0 -gt 0 ]] 23:12:43 +++ cd /w/workspace/policy-pap-master-project-csit-pap/compose 23:12:43 +++ echo 'Configuring docker compose...' 23:12:43 Configuring docker compose... 
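start-compose.sh folds its arguments into a component name plus feature flags before bringing up the stack; a rough reconstruction of that loop from the xtrace above, not the verbatim script (apex-pdp becomes the component, --grafana flips grafana=true; the --gui branch is assumed from the gui=false initialisation):

    grafana=false
    gui=false
    while [[ $# -gt 0 ]]; do
        key="$1"
        case $key in
            --grafana) grafana=true ;;
            --gui)     gui=true ;;
            *)         echo "$key"; component="$key" ;;
        esac
        shift
    done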
23:12:43 +++ source export-ports.sh 23:12:43 +++ source get-versions.sh 23:12:45 +++ '[' -z pap ']' 23:12:45 +++ '[' -n apex-pdp ']' 23:12:45 +++ '[' apex-pdp == logs ']' 23:12:45 +++ '[' true = true ']' 23:12:45 +++ echo 'Starting apex-pdp application with Grafana' 23:12:45 Starting apex-pdp application with Grafana 23:12:45 +++ docker-compose up -d apex-pdp grafana 23:12:45 Creating network "compose_default" with the default driver 23:12:46 Pulling prometheus (nexus3.onap.org:10001/prom/prometheus:latest)... 23:12:46 latest: Pulling from prom/prometheus 23:12:48 Digest: sha256:bc1794e85c9e00293351b967efa267ce6af1c824ac875a9d0c7ac84700a8b53e 23:12:48 Status: Downloaded newer image for nexus3.onap.org:10001/prom/prometheus:latest 23:12:48 Pulling grafana (nexus3.onap.org:10001/grafana/grafana:latest)... 23:12:49 latest: Pulling from grafana/grafana 23:12:53 Digest: sha256:8640e5038e83ca4554ed56b9d76375158bcd51580238c6f5d8adaf3f20dd5379 23:12:53 Status: Downloaded newer image for nexus3.onap.org:10001/grafana/grafana:latest 23:12:53 Pulling mariadb (nexus3.onap.org:10001/mariadb:10.10.2)... 23:12:53 10.10.2: Pulling from mariadb 23:12:58 Digest: sha256:bfc25a68e113de43d0d112f5a7126df8e278579c3224e3923359e1c1d8d5ce6e 23:12:58 Status: Downloaded newer image for nexus3.onap.org:10001/mariadb:10.10.2 23:12:58 Pulling simulator (nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT)... 23:12:58 3.1.2-SNAPSHOT: Pulling from onap/policy-models-simulator 23:13:02 Digest: sha256:5772a5c551b30d73f901debb8dc38f305559b920e248a9ccb1dba3b880278a13 23:13:02 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-models-simulator:3.1.2-SNAPSHOT 23:13:02 Pulling zookeeper (confluentinc/cp-zookeeper:latest)... 23:13:03 latest: Pulling from confluentinc/cp-zookeeper 23:13:13 Digest: sha256:9babd1c0beaf93189982bdbb9fe4bf194a2730298b640c057817746c19838866 23:13:13 Status: Downloaded newer image for confluentinc/cp-zookeeper:latest 23:13:13 Pulling kafka (confluentinc/cp-kafka:latest)... 23:13:14 latest: Pulling from confluentinc/cp-kafka 23:13:16 Digest: sha256:24cdd3a7fa89d2bed150560ebea81ff1943badfa61e51d66bb541a6b0d7fb047 23:13:16 Status: Downloaded newer image for confluentinc/cp-kafka:latest 23:13:16 Pulling policy-db-migrator (nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT)... 23:13:16 3.1.2-SNAPSHOT: Pulling from onap/policy-db-migrator 23:13:25 Digest: sha256:ed573692302e5a28aa3b51a60adbd7641290e273719edd44bc9ff784d1569efa 23:13:25 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-db-migrator:3.1.2-SNAPSHOT 23:13:25 Pulling api (nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT)... 23:13:26 3.1.2-SNAPSHOT: Pulling from onap/policy-api 23:13:27 Digest: sha256:fdc9aa26830be0af882248f5f576f0e9466b8e17ff432e8618d01432efa85803 23:13:27 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-api:3.1.2-SNAPSHOT 23:13:27 Pulling pap (nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT)... 23:13:27 3.1.2-SNAPSHOT: Pulling from onap/policy-pap 23:13:29 Digest: sha256:448850bc9066413f6555e9c62d97da12eaa2c454a1304262987462aae46f4676 23:13:29 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT 23:13:29 Pulling apex-pdp (nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT)... 
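The stack is brought up from the compose folder exactly as traced above; replayed by hand against the same Nexus mirror that sequence is:

    cd /w/workspace/policy-pap-master-project-csit-pap/compose
    source export-ports.sh
    source get-versions.sh
    docker-compose up -d apex-pdp grafana   # dependent services (mariadb, kafka, api, pap, ...) are pulled and started implicitly
    # each pulled image's digest should match the one logged during the pull, e.g.:
    docker image inspect --format '{{index .RepoDigests 0}}' nexus3.onap.org:10001/onap/policy-pap:3.1.2-SNAPSHOT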
23:13:29 3.1.2-SNAPSHOT: Pulling from onap/policy-apex-pdp 23:13:38 Digest: sha256:8670bcaff746ebc196cef9125561eb167e1e65c7e2f8d374c0d8834d57564da4 23:13:38 Status: Downloaded newer image for nexus3.onap.org:10001/onap/policy-apex-pdp:3.1.2-SNAPSHOT 23:13:38 Creating mariadb ... 23:13:38 Creating simulator ... 23:13:38 Creating prometheus ... 23:13:38 Creating compose_zookeeper_1 ... 23:13:54 Creating prometheus ... done 23:13:54 Creating grafana ... 23:13:55 Creating compose_zookeeper_1 ... done 23:13:55 Creating kafka ... 23:13:56 Creating simulator ... done 23:13:56 Creating grafana ... done 23:13:58 Creating mariadb ... done 23:13:58 Creating policy-db-migrator ... 23:13:59 Creating policy-db-migrator ... done 23:13:59 Creating policy-api ... 23:14:00 Creating policy-api ... done 23:14:00 Creating kafka ... done 23:14:00 Creating policy-pap ... 23:14:01 Creating policy-pap ... done 23:14:01 Creating policy-apex-pdp ... 23:14:03 Creating policy-apex-pdp ... done 23:14:03 +++ echo 'Prometheus server: http://localhost:30259' 23:14:03 Prometheus server: http://localhost:30259 23:14:03 +++ echo 'Grafana server: http://localhost:30269' 23:14:03 Grafana server: http://localhost:30269 23:14:03 +++ cd /w/workspace/policy-pap-master-project-csit-pap 23:14:03 ++ sleep 10 23:14:13 ++ unset http_proxy https_proxy 23:14:13 ++ bash /w/workspace/policy-pap-master-project-csit-pap/csit/resources/scripts/wait_for_rest.sh localhost 30003 23:14:13 Waiting for REST to come up on localhost port 30003... 23:14:13 NAMES STATUS 23:14:13 policy-apex-pdp Up 10 seconds 23:14:13 policy-pap Up 11 seconds 23:14:13 policy-api Up 13 seconds 23:14:13 kafka Up 12 seconds 23:14:13 grafana Up 16 seconds 23:14:13 compose_zookeeper_1 Up 18 seconds 23:14:13 mariadb Up 15 seconds 23:14:13 prometheus Up 19 seconds 23:14:13 simulator Up 17 seconds 23:14:18 NAMES STATUS 23:14:18 policy-apex-pdp Up 15 seconds 23:14:18 policy-pap Up 16 seconds 23:14:18 policy-api Up 18 seconds 23:14:18 kafka Up 17 seconds 23:14:18 grafana Up 21 seconds 23:14:18 compose_zookeeper_1 Up 23 seconds 23:14:18 mariadb Up 20 seconds 23:14:18 prometheus Up 24 seconds 23:14:18 simulator Up 22 seconds 23:14:23 NAMES STATUS 23:14:23 policy-apex-pdp Up 20 seconds 23:14:23 policy-pap Up 21 seconds 23:14:23 policy-api Up 23 seconds 23:14:23 kafka Up 22 seconds 23:14:23 grafana Up 26 seconds 23:14:23 compose_zookeeper_1 Up 28 seconds 23:14:23 mariadb Up 25 seconds 23:14:23 prometheus Up 29 seconds 23:14:23 simulator Up 27 seconds 23:14:28 NAMES STATUS 23:14:28 policy-apex-pdp Up 25 seconds 23:14:28 policy-pap Up 26 seconds 23:14:28 policy-api Up 28 seconds 23:14:28 kafka Up 27 seconds 23:14:28 grafana Up 31 seconds 23:14:28 compose_zookeeper_1 Up 33 seconds 23:14:28 mariadb Up 30 seconds 23:14:28 prometheus Up 34 seconds 23:14:28 simulator Up 32 seconds 23:14:33 NAMES STATUS 23:14:33 policy-apex-pdp Up 30 seconds 23:14:33 policy-pap Up 31 seconds 23:14:33 policy-api Up 33 seconds 23:14:33 kafka Up 32 seconds 23:14:33 grafana Up 36 seconds 23:14:33 compose_zookeeper_1 Up 38 seconds 23:14:33 mariadb Up 35 seconds 23:14:33 prometheus Up 39 seconds 23:14:33 simulator Up 37 seconds 23:14:38 NAMES STATUS 23:14:38 policy-apex-pdp Up 35 seconds 23:14:38 policy-pap Up 36 seconds 23:14:38 policy-api Up 38 seconds 23:14:38 kafka Up 37 seconds 23:14:38 grafana Up 41 seconds 23:14:38 compose_zookeeper_1 Up 43 seconds 23:14:38 mariadb Up 40 seconds 23:14:38 prometheus Up 44 seconds 23:14:38 simulator Up 42 seconds 23:14:38 ++ export 'SUITES=pap-test.robot 23:14:38 
pap-slas.robot' 23:14:38 ++ SUITES='pap-test.robot 23:14:38 pap-slas.robot' 23:14:38 ++ ROBOT_VARIABLES='-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 23:14:38 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' 23:14:38 + load_set 23:14:38 + _setopts=hxB 23:14:38 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:14:38 ++ tr : ' ' 23:14:38 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:14:38 + set +o braceexpand 23:14:38 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:14:38 + set +o hashall 23:14:38 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:14:38 + set +o interactive-comments 23:14:38 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:14:38 + set +o xtrace 23:14:38 ++ echo hxB 23:14:38 ++ sed 's/./& /g' 23:14:38 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:14:38 + set +h 23:14:38 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:14:38 + set +x 23:14:38 + docker_stats 23:14:38 + tee /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap/_sysinfo-1-after-setup.txt 23:14:38 ++ uname -s 23:14:38 + '[' Linux == Darwin ']' 23:14:38 + sh -c 'top -bn1 | head -3' 23:14:38 top - 23:14:38 up 4 min, 0 users, load average: 3.74, 1.61, 0.63 23:14:38 Tasks: 208 total, 1 running, 131 sleeping, 0 stopped, 0 zombie 23:14:38 %Cpu(s): 13.6 us, 2.8 sy, 0.0 ni, 79.5 id, 4.0 wa, 0.0 hi, 0.1 si, 0.1 st 23:14:38 + echo 23:14:38 + sh -c 'free -h' 23:14:38 23:14:38 total used free shared buff/cache available 23:14:38 Mem: 31G 2.7G 22G 1.3M 6.2G 28G 23:14:38 Swap: 1.0G 0B 1.0G 23:14:38 + echo 23:14:38 + docker ps --format 'table {{ .Names }}\t{{ .Status }}' 23:14:38 23:14:38 NAMES STATUS 23:14:38 policy-apex-pdp Up 35 seconds 23:14:38 policy-pap Up 36 seconds 23:14:38 policy-api Up 38 seconds 23:14:38 kafka Up 37 seconds 23:14:38 grafana Up 41 seconds 23:14:38 compose_zookeeper_1 Up 43 seconds 23:14:38 mariadb Up 40 seconds 23:14:38 prometheus Up 44 seconds 23:14:38 simulator Up 42 seconds 23:14:38 + echo 23:14:38 + docker stats --no-stream 23:14:38 23:14:41 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 23:14:41 fbbe89fd2aa8 policy-apex-pdp 1.73% 184.8MiB / 31.41GiB 0.57% 7.3kB / 6.93kB 0B / 0B 48 23:14:41 1e5fc700def9 policy-pap 36.64% 514.9MiB / 31.41GiB 1.60% 30.9kB / 33.1kB 0B / 153MB 62 23:14:41 609f74000387 policy-api 86.13% 462.2MiB / 31.41GiB 1.44% 1MB / 737kB 0B / 0B 54 23:14:41 336a00f9700a kafka 40.29% 400.5MiB / 31.41GiB 1.25% 72.8kB / 76kB 0B / 508kB 85 23:14:41 ba493c6b9155 grafana 0.11% 57.45MiB / 31.41GiB 0.18% 18.8kB / 3.35kB 0B / 24MB 21 23:14:41 c7c563d3e376 compose_zookeeper_1 0.10% 100.9MiB / 31.41GiB 0.31% 57kB / 50.1kB 0B / 422kB 60 23:14:41 3ee38ed25341 mariadb 0.02% 102.2MiB / 31.41GiB 0.32% 996kB / 1.19MB 11MB / 57.9MB 39 23:14:41 b6a554ddd50c prometheus 2.32% 20.27MiB / 31.41GiB 0.06% 28kB / 934B 156kB / 0B 13 23:14:41 81f72422b0cb simulator 0.07% 121.1MiB / 31.41GiB 0.38% 1.23kB / 0B 0B / 0B 76 23:14:41 + echo 23:14:41 23:14:41 + cd /tmp/tmp.h1E7S28y7V 23:14:41 + echo 'Reading the testplan:' 23:14:41 Reading the testplan: 23:14:41 + echo 'pap-test.robot 23:14:41 pap-slas.robot' 23:14:41 + egrep -v '(^[[:space:]]*#|^[[:space:]]*$)' 23:14:41 + sed 's|^|/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/|' 23:14:41 + cat testplan.txt 23:14:41 
/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot 23:14:41 /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot 23:14:41 ++ xargs 23:14:41 + SUITES='/w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot' 23:14:41 + echo 'ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 23:14:41 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates' 23:14:41 ROBOT_VARIABLES=-v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 23:14:41 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates 23:14:41 + echo 'Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ...' 23:14:41 Starting Robot test suites /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot ... 23:14:41 + relax_set 23:14:41 + set +e 23:14:41 + set +o pipefail 23:14:41 + python3 -m robot.run -N pap -v WORKSPACE:/tmp -v POLICY_PAP_IP:localhost:30003 -v POLICY_API_IP:localhost:30002 -v PROMETHEUS_IP:localhost:30259 -v DATA:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/policies -v NODETEMPLATES:/w/workspace/policy-pap-master-project-csit-pap/models/models-examples/src/main/resources/nodetemplates /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-test.robot /w/workspace/policy-pap-master-project-csit-pap/csit/resources/tests/pap-slas.robot 23:14:41 ============================================================================== 23:14:41 pap 23:14:41 ============================================================================== 23:14:41 pap.Pap-Test 23:14:41 ============================================================================== 23:14:42 LoadPolicy :: Create a policy named 'onap.restart.tca' and version... | PASS | 23:14:42 ------------------------------------------------------------------------------ 23:14:42 LoadPolicyWithMetadataSet :: Create a policy named 'operational.ap... | PASS | 23:14:42 ------------------------------------------------------------------------------ 23:14:43 LoadNodeTemplates :: Create node templates in database using speci... 
| PASS | 23:14:43 ------------------------------------------------------------------------------ 23:14:43 Healthcheck :: Verify policy pap health check | PASS | 23:14:43 ------------------------------------------------------------------------------ 23:15:03 Consolidated Healthcheck :: Verify policy consolidated health check | PASS | 23:15:03 ------------------------------------------------------------------------------ 23:15:04 Metrics :: Verify policy pap is exporting prometheus metrics | PASS | 23:15:04 ------------------------------------------------------------------------------ 23:15:04 AddPdpGroup :: Add a new PdpGroup named 'testGroup' in the policy ... | PASS | 23:15:04 ------------------------------------------------------------------------------ 23:15:04 QueryPdpGroupsBeforeActivation :: Verify PdpGroups before activation | PASS | 23:15:04 ------------------------------------------------------------------------------ 23:15:05 ActivatePdpGroup :: Change the state of PdpGroup named 'testGroup'... | PASS | 23:15:05 ------------------------------------------------------------------------------ 23:15:05 QueryPdpGroupsAfterActivation :: Verify PdpGroups after activation | PASS | 23:15:05 ------------------------------------------------------------------------------ 23:15:05 DeployPdpGroups :: Deploy policies in PdpGroups | PASS | 23:15:05 ------------------------------------------------------------------------------ 23:15:05 QueryPdpGroupsAfterDeploy :: Verify PdpGroups after undeploy | PASS | 23:15:05 ------------------------------------------------------------------------------ 23:15:05 QueryPolicyAuditAfterDeploy :: Verify policy audit record after de... | PASS | 23:15:05 ------------------------------------------------------------------------------ 23:15:05 QueryPolicyAuditWithMetadataSetAfterDeploy :: Verify policy audit ... | PASS | 23:15:05 ------------------------------------------------------------------------------ 23:15:06 UndeployPolicy :: Undeploy a policy named 'onap.restart.tca' from ... | PASS | 23:15:06 ------------------------------------------------------------------------------ 23:15:06 UndeployPolicyWithMetadataSet :: Undeploy a policy named 'operatio... | PASS | 23:15:06 ------------------------------------------------------------------------------ 23:15:06 QueryPdpGroupsAfterUndeploy :: Verify PdpGroups after undeploy | PASS | 23:15:06 ------------------------------------------------------------------------------ 23:15:26 QueryPolicyAuditAfterUnDeploy :: Verify policy audit record after ... | PASS | 23:15:26 ------------------------------------------------------------------------------ 23:15:27 QueryPolicyAuditWithMetadataSetAfterUnDeploy :: Verify policy audi... | PASS | 23:15:27 ------------------------------------------------------------------------------ 23:15:27 DeactivatePdpGroup :: Change the state of PdpGroup named 'testGrou... | PASS | 23:15:27 ------------------------------------------------------------------------------ 23:15:27 DeletePdpGroups :: Delete the PdpGroup named 'testGroup' from poli... 
| PASS | 23:15:27 ------------------------------------------------------------------------------ 23:15:27 QueryPdpGroupsAfterDelete :: Verify PdpGroups after delete | PASS | 23:15:27 ------------------------------------------------------------------------------ 23:15:27 pap.Pap-Test | PASS | 23:15:27 22 tests, 22 passed, 0 failed 23:15:27 ============================================================================== 23:15:27 pap.Pap-Slas 23:15:27 ============================================================================== 23:16:27 WaitForPrometheusServer :: Wait for Prometheus server to gather al... | PASS | 23:16:27 ------------------------------------------------------------------------------ 23:16:27 ValidateResponseTimeForHealthcheck :: Validate component healthche... | PASS | 23:16:27 ------------------------------------------------------------------------------ 23:16:27 ValidateResponseTimeForSystemHealthcheck :: Validate if system hea... | PASS | 23:16:27 ------------------------------------------------------------------------------ 23:16:27 ValidateResponseTimeQueryPolicyAudit :: Validate query audits resp... | PASS | 23:16:27 ------------------------------------------------------------------------------ 23:16:27 ValidateResponseTimeUpdateGroup :: Validate pdps/group response time | PASS | 23:16:27 ------------------------------------------------------------------------------ 23:16:27 ValidatePolicyDeploymentTime :: Check if deployment of policy is u... | PASS | 23:16:27 ------------------------------------------------------------------------------ 23:16:27 ValidateResponseTimeDeletePolicy :: Check if undeployment of polic... | PASS | 23:16:27 ------------------------------------------------------------------------------ 23:16:27 ValidateResponseTimeDeleteGroup :: Validate delete group response ... 
| PASS | 23:16:27 ------------------------------------------------------------------------------ 23:16:27 pap.Pap-Slas | PASS | 23:16:27 8 tests, 8 passed, 0 failed 23:16:27 ============================================================================== 23:16:27 pap | PASS | 23:16:27 30 tests, 30 passed, 0 failed 23:16:27 ============================================================================== 23:16:27 Output: /tmp/tmp.h1E7S28y7V/output.xml 23:16:27 Log: /tmp/tmp.h1E7S28y7V/log.html 23:16:27 Report: /tmp/tmp.h1E7S28y7V/report.html 23:16:27 + RESULT=0 23:16:27 + load_set 23:16:27 + _setopts=hxB 23:16:27 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:16:27 ++ tr : ' ' 23:16:27 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:16:27 + set +o braceexpand 23:16:27 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:16:27 + set +o hashall 23:16:27 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:16:27 + set +o interactive-comments 23:16:27 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:16:27 + set +o xtrace 23:16:27 ++ echo hxB 23:16:27 ++ sed 's/./& /g' 23:16:27 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:16:27 + set +h 23:16:27 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:16:27 + set +x 23:16:27 + echo 'RESULT: 0' 23:16:27 RESULT: 0 23:16:27 + exit 0 23:16:27 + on_exit 23:16:27 + rc=0 23:16:27 + [[ -n /w/workspace/policy-pap-master-project-csit-pap ]] 23:16:27 + docker ps --format 'table {{ .Names }}\t{{ .Status }}' 23:16:27 NAMES STATUS 23:16:27 policy-apex-pdp Up 2 minutes 23:16:27 policy-pap Up 2 minutes 23:16:27 policy-api Up 2 minutes 23:16:27 kafka Up 2 minutes 23:16:27 grafana Up 2 minutes 23:16:27 compose_zookeeper_1 Up 2 minutes 23:16:27 mariadb Up 2 minutes 23:16:27 prometheus Up 2 minutes 23:16:27 simulator Up 2 minutes 23:16:27 + docker_stats 23:16:27 ++ uname -s 23:16:27 + '[' Linux == Darwin ']' 23:16:27 + sh -c 'top -bn1 | head -3' 23:16:28 top - 23:16:28 up 6 min, 0 users, load average: 0.81, 1.24, 0.60 23:16:28 Tasks: 197 total, 2 running, 129 sleeping, 0 stopped, 0 zombie 23:16:28 %Cpu(s): 10.8 us, 2.1 sy, 0.0 ni, 84.0 id, 3.1 wa, 0.0 hi, 0.1 si, 0.1 st 23:16:28 + echo 23:16:28 23:16:28 + sh -c 'free -h' 23:16:28 total used free shared buff/cache available 23:16:28 Mem: 31G 2.7G 22G 1.3M 6.2G 28G 23:16:28 Swap: 1.0G 0B 1.0G 23:16:28 + echo 23:16:28 23:16:28 + docker ps --format 'table {{ .Names }}\t{{ .Status }}' 23:16:28 NAMES STATUS 23:16:28 policy-apex-pdp Up 2 minutes 23:16:28 policy-pap Up 2 minutes 23:16:28 policy-api Up 2 minutes 23:16:28 kafka Up 2 minutes 23:16:28 grafana Up 2 minutes 23:16:28 compose_zookeeper_1 Up 2 minutes 23:16:28 mariadb Up 2 minutes 23:16:28 prometheus Up 2 minutes 23:16:28 simulator Up 2 minutes 23:16:28 + echo 23:16:28 23:16:28 + docker stats --no-stream 23:16:30 CONTAINER ID NAME CPU % MEM USAGE / LIMIT MEM % NET I/O BLOCK I/O PIDS 23:16:30 fbbe89fd2aa8 policy-apex-pdp 1.69% 188.3MiB / 31.41GiB 0.59% 57.1kB / 91.5kB 0B / 0B 52 23:16:30 1e5fc700def9 policy-pap 0.46% 502.6MiB / 31.41GiB 1.56% 2.33MB / 801kB 0B / 153MB 65 23:16:30 609f74000387 policy-api 0.09% 516.7MiB / 31.41GiB 1.61% 2.49MB / 1.26MB 0B / 0B 55 23:16:30 336a00f9700a kafka 9.48% 399.2MiB / 31.41GiB 1.24% 241kB / 217kB 0B / 606kB 85 23:16:30 ba493c6b9155 grafana 0.15% 65.43MiB / 31.41GiB 0.20% 20.2kB / 4.9kB 0B / 24MB 21 23:16:30 c7c563d3e376 compose_zookeeper_1 0.08% 101.8MiB / 31.41GiB 0.32% 59.9kB / 51.6kB 0B / 422kB 60 23:16:30 3ee38ed25341 mariadb 0.01% 103.8MiB / 31.41GiB 0.32% 1.95MB / 4.77MB 11MB / 58.3MB 28 23:16:30 b6a554ddd50c 
prometheus 0.00% 25.25MiB / 31.41GiB 0.08% 167kB / 10.8kB 156kB / 0B 13 23:16:30 81f72422b0cb simulator 0.06% 121.2MiB / 31.41GiB 0.38% 1.58kB / 0B 0B / 0B 78 23:16:30 + echo 23:16:30 23:16:30 + source_safely /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh 23:16:30 + '[' -z /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh ']' 23:16:30 + relax_set 23:16:30 + set +e 23:16:30 + set +o pipefail 23:16:30 + . /w/workspace/policy-pap-master-project-csit-pap/compose/stop-compose.sh 23:16:30 ++ echo 'Shut down started!' 23:16:30 Shut down started! 23:16:30 ++ '[' -z /w/workspace/policy-pap-master-project-csit-pap ']' 23:16:30 ++ COMPOSE_FOLDER=/w/workspace/policy-pap-master-project-csit-pap/compose 23:16:30 ++ cd /w/workspace/policy-pap-master-project-csit-pap/compose 23:16:30 ++ source export-ports.sh 23:16:30 ++ source get-versions.sh 23:16:32 ++ echo 'Collecting logs from docker compose containers...' 23:16:32 Collecting logs from docker compose containers... 23:16:32 ++ docker-compose logs 23:16:33 ++ cat docker_compose.log 23:16:33 Attaching to policy-apex-pdp, policy-pap, policy-api, policy-db-migrator, kafka, grafana, compose_zookeeper_1, mariadb, prometheus, simulator 23:16:33 zookeeper_1 | ===> User 23:16:33 zookeeper_1 | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 23:16:33 zookeeper_1 | ===> Configuring ... 23:16:33 zookeeper_1 | ===> Running preflight checks ... 23:16:33 zookeeper_1 | ===> Check if /var/lib/zookeeper/data is writable ... 23:16:33 zookeeper_1 | ===> Check if /var/lib/zookeeper/log is writable ... 23:16:33 zookeeper_1 | ===> Launching ... 23:16:33 zookeeper_1 | ===> Launching zookeeper ... 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,586] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,593] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,593] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,593] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,593] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,594] INFO autopurge.snapRetainCount set to 3 (org.apache.zookeeper.server.DatadirCleanupManager) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,594] INFO autopurge.purgeInterval set to 0 (org.apache.zookeeper.server.DatadirCleanupManager) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,594] INFO Purge task is not scheduled. (org.apache.zookeeper.server.DatadirCleanupManager) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,594] WARN Either no config or no quorum defined in config, running in standalone mode (org.apache.zookeeper.server.quorum.QuorumPeerMain) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,595] INFO Log4j 1.2 jmx support not found; jmx disabled. 
(org.apache.zookeeper.jmx.ManagedUtil) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,596] INFO Reading configuration from: /etc/kafka/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,596] INFO clientPortAddress is 0.0.0.0:2181 (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,596] INFO secureClientPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,596] INFO observerMasterPort is not set (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,596] INFO metricsProvider.className is org.apache.zookeeper.metrics.impl.DefaultMetricsProvider (org.apache.zookeeper.server.quorum.QuorumPeerConfig) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,596] INFO Starting server (org.apache.zookeeper.server.ZooKeeperServerMain) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,608] INFO ServerMetrics initialized with provider org.apache.zookeeper.metrics.impl.DefaultMetricsProvider@26275bef (org.apache.zookeeper.server.ServerMetrics) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,611] INFO ACL digest algorithm is: SHA1 (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,611] INFO zookeeper.DigestAuthenticationProvider.enabled = true (org.apache.zookeeper.server.auth.DigestAuthenticationProvider) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,613] INFO zookeeper.snapshot.trust.empty : false (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,622] INFO (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,623] INFO ______ _ (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,623] INFO |___ / | | (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,623] INFO / / ___ ___ | | __ ___ ___ _ __ ___ _ __ (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,623] INFO / / / _ \ / _ \ | |/ / / _ \ / _ \ | '_ \ / _ \ | '__| (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,623] INFO / /__ | (_) | | (_) | | < | __/ | __/ | |_) | | __/ | | (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,623] INFO /_____| \___/ \___/ |_|\_\ \___| \___| | .__/ \___| |_| (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,623] INFO | | (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,623] INFO |_| (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,623] INFO (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 grafana | logger=settings t=2024-03-01T23:13:57.106397042Z level=info msg="Starting Grafana" version=10.3.3 commit=252761264e22ece57204b327f9130d3b44592c01 branch=HEAD compiled=2024-03-01T23:13:57Z 23:16:33 grafana | logger=settings t=2024-03-01T23:13:57.106661414Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini 23:16:33 grafana | logger=settings t=2024-03-01T23:13:57.106675304Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini 23:16:33 grafana | logger=settings t=2024-03-01T23:13:57.106679464Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana" 23:16:33 grafana | logger=settings 
t=2024-03-01T23:13:57.106705894Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana" 23:16:33 grafana | logger=settings t=2024-03-01T23:13:57.106719914Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins" 23:16:33 grafana | logger=settings t=2024-03-01T23:13:57.106723094Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning" 23:16:33 grafana | logger=settings t=2024-03-01T23:13:57.106750524Z level=info msg="Config overridden from command line" arg="default.log.mode=console" 23:16:33 grafana | logger=settings t=2024-03-01T23:13:57.106761004Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana" 23:16:33 grafana | logger=settings t=2024-03-01T23:13:57.106764564Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana" 23:16:33 grafana | logger=settings t=2024-03-01T23:13:57.106767514Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins" 23:16:33 grafana | logger=settings t=2024-03-01T23:13:57.106771334Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning" 23:16:33 grafana | logger=settings t=2024-03-01T23:13:57.106774505Z level=info msg=Target target=[all] 23:16:33 grafana | logger=settings t=2024-03-01T23:13:57.106780315Z level=info msg="Path Home" path=/usr/share/grafana 23:16:33 grafana | logger=settings t=2024-03-01T23:13:57.106784705Z level=info msg="Path Data" path=/var/lib/grafana 23:16:33 grafana | logger=settings t=2024-03-01T23:13:57.106787445Z level=info msg="Path Logs" path=/var/log/grafana 23:16:33 grafana | logger=settings t=2024-03-01T23:13:57.106790025Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins 23:16:33 grafana | logger=settings t=2024-03-01T23:13:57.106792745Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning 23:16:33 grafana | logger=settings t=2024-03-01T23:13:57.106796615Z level=info msg="App mode production" 23:16:33 grafana | logger=sqlstore t=2024-03-01T23:13:57.107139467Z level=info msg="Connecting to DB" dbtype=sqlite3 23:16:33 grafana | logger=sqlstore t=2024-03-01T23:13:57.107161187Z level=info msg="Creating SQLite database file" path=/var/lib/grafana/grafana.db 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.108067283Z level=info msg="Starting DB migrations" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.109098519Z level=info msg="Executing migration" id="create migration_log table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.109979204Z level=info msg="Migration successfully executed" id="create migration_log table" duration=878.164µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.114760852Z level=info msg="Executing migration" id="create user table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.115298725Z level=info msg="Migration successfully executed" id="create user table" duration=537.723µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.120209295Z level=info msg="Executing migration" id="add unique index user.login" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.120781268Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=571.723µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.125083544Z level=info msg="Executing migration" id="add unique index 
user.email" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.125635297Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=553.653µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.133571875Z level=info msg="Executing migration" id="drop index UQE_user_login - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.134194099Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=621.894µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.138555695Z level=info msg="Executing migration" id="drop index UQE_user_email - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.139642082Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=1.083987ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.14441901Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.148720296Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=4.297146ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.154491791Z level=info msg="Executing migration" id="create user table v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.155280706Z level=info msg="Migration successfully executed" id="create user table v2" duration=791.885µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.160215775Z level=info msg="Executing migration" id="create index UQE_user_login - v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.16098495Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=768.655µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.165366336Z level=info msg="Executing migration" id="create index UQE_user_email - v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.166101371Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=734.825µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.172354329Z level=info msg="Executing migration" id="copy data_source v1 to v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.173073053Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=718.014µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.177889242Z level=info msg="Executing migration" id="Drop old table user_v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.178750877Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=860.945µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.183105733Z level=info msg="Executing migration" id="Add column help_flags1 to user table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.184887984Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=1.77699ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.188628126Z level=info msg="Executing migration" id="Update user table charset" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.188659737Z level=info msg="Migration successfully executed" id="Update user table charset" duration=32.431µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.193753457Z level=info msg="Executing migration" id="Add last_seen_at column to user" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.194877994Z level=info msg="Migration successfully executed" id="Add 
last_seen_at column to user" duration=1.124057ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.200457438Z level=info msg="Executing migration" id="Add missing user data" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.20092281Z level=info msg="Migration successfully executed" id="Add missing user data" duration=467.152µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.204854743Z level=info msg="Executing migration" id="Add is_disabled column to user" 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,624] INFO Server environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,624] INFO Server environment:host.name=c7c563d3e376 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,624] INFO Server environment:java.version=11.0.21 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,624] INFO Server environment:java.vendor=Azul Systems, Inc. (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,624] INFO Server environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.206650635Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.795122ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.210767969Z level=info msg="Executing migration" id="Add index user.login/user.email" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.211938006Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=1.169437ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.218546186Z level=info msg="Executing migration" id="Add is_service_account column to user" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.219750104Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.203448ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.224359091Z level=info msg="Executing migration" id="Update is_service_account column to nullable" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.233943239Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=9.583468ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.237968253Z level=info msg="Executing migration" id="create temp user table v1-7" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.238707147Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=738.334µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.244618424Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.245430368Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=811.274µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.250122567Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.251299043Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=1.172916ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.256741436Z level=info msg="Executing migration" id="create index 
IDX_temp_user_code - v1-7" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.257513501Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=772.025µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.263619797Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.264869175Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=1.248468ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.269777105Z level=info msg="Executing migration" id="Update temp_user table charset" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.269815315Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=39.83µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.274797934Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.275866831Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=1.068947ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.28080241Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.281516345Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=713.335µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.286602946Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.287677892Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=1.071396ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.292444741Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.293552968Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=1.108497ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.298441457Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.302016248Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.574311ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.30560716Z level=info msg="Executing migration" id="create temp_user v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.306410945Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=803.135µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.310148687Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.310918372Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=769.075µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.315798932Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.316623676Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=824.334µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.320237658Z level=info msg="Executing migration" 
id="create index IDX_temp_user_code - v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.321120154Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=882.215µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.361438596Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.362647313Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=1.208397ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.367349311Z level=info msg="Executing migration" id="copy temp_user v1 to v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.368028106Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=678.035µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.373011265Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.373894181Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=882.076µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.378085196Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.378712329Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=626.403µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.382770414Z level=info msg="Executing migration" id="create star table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.383396338Z level=info msg="Migration successfully executed" id="create star table" duration=623.004µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.388569439Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.389559255Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=988.276µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.39546768Z level=info msg="Executing migration" id="create org table v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.396636408Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.168428ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.402558683Z level=info msg="Executing migration" id="create index UQE_org_name - v1" 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,624] INFO Server 
environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:
/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,624] INFO Server environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,624] INFO Server 
environment:java.io.tmpdir=/tmp (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,624] INFO Server environment:java.compiler= (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,624] INFO Server environment:os.name=Linux (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,624] INFO Server environment:os.arch=amd64 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,624] INFO Server environment:os.version=4.15.0-192-generic (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,624] INFO Server environment:user.name=appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,624] INFO Server environment:user.home=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,624] INFO Server environment:user.dir=/home/appuser (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,624] INFO Server environment:os.memory.free=490MB (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,624] INFO Server environment:os.memory.max=512MB (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,624] INFO Server environment:os.memory.total=512MB (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,624] INFO zookeeper.enableEagerACLCheck = false (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,624] INFO zookeeper.digest.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,624] INFO zookeeper.closeSessionTxn.enabled = true (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,625] INFO zookeeper.flushDelay = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,625] INFO zookeeper.maxWriteQueuePollTime = 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,625] INFO zookeeper.maxBatchSize=1000 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,625] INFO zookeeper.intBufferStartingSizeBytes = 1024 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,625] INFO Weighed connection throttling is disabled (org.apache.zookeeper.server.BlueThrottle) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,626] INFO minSessionTimeout set to 4000 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,626] INFO maxSessionTimeout set to 40000 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,627] INFO getData response cache size is initialized with value 400. (org.apache.zookeeper.server.ResponseCache) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,627] INFO getChildren response cache size is initialized with value 400. 
(org.apache.zookeeper.server.ResponseCache) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,628] INFO zookeeper.pathStats.slotCapacity = 60 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,628] INFO zookeeper.pathStats.slotDuration = 15 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,628] INFO zookeeper.pathStats.maxDepth = 6 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,628] INFO zookeeper.pathStats.initialDelay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,628] INFO zookeeper.pathStats.delay = 5 (org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.403376958Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=818.035µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.407401553Z level=info msg="Executing migration" id="create org_user table v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.408506319Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=1.104126ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.413764401Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.414986708Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=1.221987ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.418897342Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.419710047Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=812.335µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.42367412Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.424749377Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=1.073877ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.429105613Z level=info msg="Executing migration" id="Update org table charset" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.429143753Z level=info msg="Migration successfully executed" id="Update org table charset" duration=39.73µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.435495802Z level=info msg="Executing migration" id="Update org_user table charset" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.435532572Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=42.14µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.441728179Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.442069791Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=342.672µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.445556932Z level=info msg="Executing migration" id="create dashboard table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.446255377Z level=info msg="Migration successfully executed" id="create dashboard table" duration=695.225µs 23:16:33 grafana | logger=migrator 
t=2024-03-01T23:13:57.449804488Z level=info msg="Executing migration" id="add index dashboard.account_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.450937754Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.131526ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.456290996Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.457619455Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=1.327229ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.461993511Z level=info msg="Executing migration" id="create dashboard_tag table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.462703045Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=709.034µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.466336117Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.467176302Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=840.145µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.472182782Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.472967667Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=780.765µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.477357134Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.48671071Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=9.355316ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.490687873Z level=info msg="Executing migration" id="create dashboard v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.491205797Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=517.554µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.496079336Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.497354474Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=1.274668ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.501556039Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.502957888Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=1.401029ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.507333284Z level=info msg="Executing migration" id="copy dashboard v1 to v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.507714336Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=380.212µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.513017948Z level=info msg="Executing migration" id="drop table dashboard_v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.514334086Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.319658ms 23:16:33 grafana | 
logger=migrator t=2024-03-01T23:13:57.521029476Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.521241177Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=206.601µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.527229633Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.530509274Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=3.28167ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.537452775Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.539211586Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=1.755491ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.543551771Z level=info msg="Executing migration" id="Add column gnetId in dashboard" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.545393943Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.839502ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.549592108Z level=info msg="Executing migration" id="Add index for gnetId in dashboard" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.550432663Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=839.965µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.555678125Z level=info msg="Executing migration" id="Add column plugin_id in dashboard" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.557539166Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.862251ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.561025727Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.561849112Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=823.025µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.565505154Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.566230288Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=725.514µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.570911316Z level=info msg="Executing migration" id="Update dashboard table charset" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.570932526Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=21.86µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.573967315Z level=info msg="Executing migration" id="Update dashboard_tag table charset" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.573988165Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=21.45µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.57817156Z level=info msg="Executing migration" id="Add column folder_id in dashboard" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.581556131Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=3.37997ms 23:16:33 grafana | 
logger=migrator t=2024-03-01T23:13:57.587347965Z level=info msg="Executing migration" id="Add column isFolder in dashboard" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.589434988Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.081163ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.59463603Z level=info msg="Executing migration" id="Add column has_acl in dashboard" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.596788002Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.152072ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.600239193Z level=info msg="Executing migration" id="Add column uid in dashboard" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.601617971Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=1.378778ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.60485191Z level=info msg="Executing migration" id="Update uid column values in dashboard" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.605012861Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=160.841µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.610054042Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.610979717Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=925.205µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.615387484Z level=info msg="Executing migration" id="Remove unique index org_id_slug" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.616178368Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=790.484µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.620058612Z level=info msg="Executing migration" id="Update dashboard title length" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.620092423Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=38.821µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.626744282Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.627746208Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=999.376µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.632254395Z level=info msg="Executing migration" id="create dashboard_provisioning" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.63300918Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=821.925µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.636752033Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.644086897Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=7.334643ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.648796305Z level=info msg="Executing migration" id="create dashboard_provisioning v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.649278848Z level=info msg="Migration successfully executed" id="create 
dashboard_provisioning v2" duration=476.383µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.653717725Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.654938802Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=1.220747ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.659038637Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.660341825Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=1.302689ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.665226734Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.665750437Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=523.333µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.669387339Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.669972342Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=584.443µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.673379583Z level=info msg="Executing migration" id="Add check_sum column" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.675523575Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=2.137992ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.67953155Z level=info msg="Executing migration" id="Add index for dashboard_title" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.681046139Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=1.514489ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.686715223Z level=info msg="Executing migration" id="delete tags for deleted dashboards" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.686942654Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=227.281µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.690455346Z level=info msg="Executing migration" id="delete stars for deleted dashboards" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.690635247Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=180.321µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.694052967Z level=info msg="Executing migration" id="Add index for dashboard_is_folder" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.695300454Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=1.238757ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.701350031Z level=info msg="Executing migration" id="Add isPublic for dashboard" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.704061358Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=2.712027ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.708022571Z level=info msg="Executing migration" id="create data_source table" 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,628] INFO zookeeper.pathStats.enabled = false 
(org.apache.zookeeper.server.util.RequestPathMetricsCollector) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,630] INFO The max bytes for all large requests are set to 104857600 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,630] INFO The large request threshold is set to -1 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,631] INFO zookeeper.enforce.auth.enabled = false (org.apache.zookeeper.server.AuthenticationHelper) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,631] INFO zookeeper.enforce.auth.schemes = [] (org.apache.zookeeper.server.AuthenticationHelper) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,631] INFO Created server with tickTime 2000 ms minSessionTimeout 4000 ms maxSessionTimeout 40000 ms clientPortListenBacklog -1 datadir /var/lib/zookeeper/log/version-2 snapdir /var/lib/zookeeper/data/version-2 (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,650] INFO Logging initialized @528ms to org.eclipse.jetty.util.log.Slf4jLog (org.eclipse.jetty.util.log) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,731] WARN o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STOPPED} contextPath ends with /* (org.eclipse.jetty.server.handler.ContextHandler) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,731] WARN Empty contextPath (org.eclipse.jetty.server.handler.ContextHandler) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,749] INFO jetty-9.4.53.v20231009; built: 2023-10-09T12:29:09.265Z; git: 27bde00a0b95a1d5bbee0eae7984f891d2d0f8c9; jvm 11.0.21+9-LTS (org.eclipse.jetty.server.Server) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,774] INFO DefaultSessionIdManager workerName=node0 (org.eclipse.jetty.server.session) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,774] INFO No SessionScavenger set, using defaults (org.eclipse.jetty.server.session) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,775] INFO node0 Scavenging every 660000ms (org.eclipse.jetty.server.session) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,778] WARN ServletContext@o.e.j.s.ServletContextHandler@5be1d0a4{/,null,STARTING} has uncovered http methods for path: /* (org.eclipse.jetty.security.SecurityHandler) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,786] INFO Started o.e.j.s.ServletContextHandler@5be1d0a4{/,null,AVAILABLE} (org.eclipse.jetty.server.handler.ContextHandler) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,797] INFO Started ServerConnector@4f32a3ad{HTTP/1.1, (http/1.1)}{0.0.0.0:8080} (org.eclipse.jetty.server.AbstractConnector) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,798] INFO Started @675ms (org.eclipse.jetty.server.Server) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,798] INFO Started AdminServer on address 0.0.0.0, port 8080 and command URL /commands (org.apache.zookeeper.server.admin.JettyAdminServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,801] INFO Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory (org.apache.zookeeper.server.ServerCnxnFactory) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,802] WARN maxCnxns is not configured, using default value 0. (org.apache.zookeeper.server.ServerCnxnFactory) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,804] INFO Configuring NIO connection handler with 10s sessionless connection timeout, 2 selector thread(s), 16 worker threads, and 64 kB direct buffers. 
(org.apache.zookeeper.server.NIOServerCnxnFactory) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,805] INFO binding to port 0.0.0.0/0.0.0.0:2181 (org.apache.zookeeper.server.NIOServerCnxnFactory) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,818] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,818] INFO Using org.apache.zookeeper.server.watch.WatchManager as watch manager (org.apache.zookeeper.server.watch.WatchManagerFactory) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,820] INFO zookeeper.snapshotSizeFactor = 0.33 (org.apache.zookeeper.server.ZKDatabase) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,820] INFO zookeeper.commitLogCount=500 (org.apache.zookeeper.server.ZKDatabase) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,824] INFO zookeeper.snapshot.compression.method = CHECKED (org.apache.zookeeper.server.persistence.SnapStream) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,824] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,827] INFO Snapshot loaded in 8 ms, highest zxid is 0x0, digest is 1371985504 (org.apache.zookeeper.server.ZKDatabase) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,828] INFO Snapshotting: 0x0 to /var/lib/zookeeper/data/version-2/snapshot.0 (org.apache.zookeeper.server.persistence.FileTxnSnapLog) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,829] INFO Snapshot taken in 0 ms (org.apache.zookeeper.server.ZooKeeperServer) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,837] INFO zookeeper.request_throttler.shutdownTimeout = 10000 ms (org.apache.zookeeper.server.RequestThrottler) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,837] INFO PrepRequestProcessor (sid:0) started, reconfigEnabled=false (org.apache.zookeeper.server.PrepRequestProcessor) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,849] INFO Using checkIntervalMs=60000 maxPerMinute=10000 maxNeverUsedIntervalMs=0 (org.apache.zookeeper.server.ContainerManager) 23:16:33 zookeeper_1 | [2024-03-01 23:13:58,850] INFO ZooKeeper audit is disabled. 
(org.apache.zookeeper.audit.ZKAuditProvider) 23:16:33 zookeeper_1 | [2024-03-01 23:14:05,020] INFO Creating new log file: log.1 (org.apache.zookeeper.server.persistence.FileTxnLog) 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.708939116Z level=info msg="Migration successfully executed" id="create data_source table" duration=914.555µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.712488129Z level=info msg="Executing migration" id="add index data_source.account_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.713297203Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=812.074µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.717627529Z level=info msg="Executing migration" id="add unique index data_source.account_id_name" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.718688986Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=1.059937ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.723170272Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.724320329Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=1.150127ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.728813966Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.729607731Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=790.575µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.734642521Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.746427443Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=11.795022ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.749733382Z level=info msg="Executing migration" id="create data_source table v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.750345536Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=611.754µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.753682186Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.754469901Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=786.855µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.7593121Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.760808769Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=1.496539ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.765032944Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.765760139Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=726.925µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.77105786Z level=info msg="Executing migration" id="Add column with_credentials" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.773620356Z level=info 
msg="Migration successfully executed" id="Add column with_credentials" duration=2.562036ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.777206088Z level=info msg="Executing migration" id="Add secure json data column" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.779716113Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.507455ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.783674166Z level=info msg="Executing migration" id="Update data_source table charset" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.783702186Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=28.9µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.78758044Z level=info msg="Executing migration" id="Update initial version to 1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.787876231Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=295.281µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.793145193Z level=info msg="Executing migration" id="Add read_only data column" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.796630355Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=3.483872ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.800673018Z level=info msg="Executing migration" id="Migrate logging ds to loki ds" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.801056881Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=387.713µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.804744134Z level=info msg="Executing migration" id="Update json_data with nulls" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.804993335Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=248.941µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.80923752Z level=info msg="Executing migration" id="Add uid column" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.811721765Z level=info msg="Migration successfully executed" id="Add uid column" duration=2.483835ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.816819516Z level=info msg="Executing migration" id="Update uid value" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.817077547Z level=info msg="Migration successfully executed" id="Update uid value" duration=260.401µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.820772819Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.821724836Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=951.437µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.824978595Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.825924891Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=945.646µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.831071042Z level=info msg="Executing migration" id="create api_key table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.831888596Z level=info msg="Migration successfully executed" id="create api_key table" duration=817.274µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.83576297Z level=info 
msg="Executing migration" id="add index api_key.account_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.836643055Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=879.085µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.840681059Z level=info msg="Executing migration" id="add index api_key.key" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.841606965Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=928.386µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.846769737Z level=info msg="Executing migration" id="add index api_key.account_id_name" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.847717092Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=947.005µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.853450476Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.854294871Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=844.115µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.858223725Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.859216831Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=994.766µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.864285792Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.865413798Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=1.130476ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.917341531Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.93060987Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=13.268959ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.935942632Z level=info msg="Executing migration" id="create api_key table v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.936859789Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=919.607µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.940723201Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.941918958Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=1.189507ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.945988263Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.94715377Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=1.167217ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.954003162Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.95551885Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=1.518048ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.959107262Z level=info msg="Executing migration" id="copy 
api_key v1 to v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.959625565Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=519.443µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.964469365Z level=info msg="Executing migration" id="Drop old table api_key_v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.96533352Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=865.225µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.971006154Z level=info msg="Executing migration" id="Update api_key table charset" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.971056924Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=60.75µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.977227751Z level=info msg="Executing migration" id="Add expires to api_key table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.980607131Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=3.39435ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.984543385Z level=info msg="Executing migration" id="Add service account foreign key" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.987350462Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=2.804427ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.9936048Z level=info msg="Executing migration" id="set service account foreign key to nil if 0" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.993856201Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=251.222µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:57.997548463Z level=info msg="Executing migration" id="Add last_used_at to api_key table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.000195969Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=2.647246ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.005443669Z level=info msg="Executing migration" id="Add is_revoked column to api_key table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.007931409Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=2.48735ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.011673488Z level=info msg="Executing migration" id="create dashboard_snapshot table v4" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.012452402Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=776.504µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.017008243Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.017666237Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=655.674µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.021804115Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.023084041Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=1.272946ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.029784813Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5" 23:16:33 grafana | 
logger=migrator t=2024-03-01T23:13:58.030724377Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=949.754µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.035364079Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.036199762Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=830.043µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.04000027Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.040897394Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=896.804µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.044606821Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.044677202Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=64.861µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.049922136Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.049952056Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=32.5µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.05297907Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.055682392Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.702582ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.059741402Z level=info msg="Executing migration" id="Add encrypted dashboard json column" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.064504923Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=4.756391ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.069979108Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.070199939Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=224.411µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.075264922Z level=info msg="Executing migration" id="create quota table v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.076604788Z level=info msg="Migration successfully executed" id="create quota table v1" duration=1.339216ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.082939957Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.085429249Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=2.488672ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.089684727Z level=info msg="Executing migration" id="Update quota table charset" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.089742368Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=58.571µs 23:16:33 grafana | 
logger=migrator t=2024-03-01T23:13:58.094028927Z level=info msg="Executing migration" id="create plugin_setting table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.09464722Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=617.903µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.100610647Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.101954683Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=1.344216ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.105931981Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.108748694Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.816503ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.112212529Z level=info msg="Executing migration" id="Update plugin_setting table charset" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.112233169Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=21.47µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.119471683Z level=info msg="Executing migration" id="create session table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.120245506Z level=info msg="Migration successfully executed" id="create session table" duration=773.933µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.127892481Z level=info msg="Executing migration" id="Drop old table playlist table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.127978862Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=87.161µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.132332331Z level=info msg="Executing migration" id="Drop old table playlist_item table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.132408041Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=80.31µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.135818297Z level=info msg="Executing migration" id="create playlist table v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.136323859Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=505.532µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.140004596Z level=info msg="Executing migration" id="create playlist item table v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.140537528Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=532.762µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.145925573Z level=info msg="Executing migration" id="Update playlist table charset" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.145949553Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=24.57µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.148793406Z level=info msg="Executing migration" id="Update playlist_item table charset" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.148829846Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=37.92µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.164120136Z level=info 
msg="Executing migration" id="Add playlist column created_at" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.168847777Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=4.727231ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.174531084Z level=info msg="Executing migration" id="Add playlist column updated_at" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.177488707Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.960413ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.181342904Z level=info msg="Executing migration" id="drop preferences table v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.181426185Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=81.83µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.18478603Z level=info msg="Executing migration" id="drop preferences table v3" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.18485913Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=73.71µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.188491527Z level=info msg="Executing migration" id="create preferences table v3" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.18918659Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=694.573µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.201472985Z level=info msg="Executing migration" id="Update preferences table charset" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.201586346Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=115.451µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.207321932Z level=info msg="Executing migration" id="Add column team_id in preferences" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.210973369Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=3.650807ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.214965077Z level=info msg="Executing migration" id="Update team_id column values in preferences" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.215109398Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=144.551µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.222328511Z level=info msg="Executing migration" id="Add column week_start in preferences" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.226335498Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=4.006497ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.231407762Z level=info msg="Executing migration" id="Add column preferences.json_data" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.23552038Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=4.112248ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.242445952Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.242513582Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=68.52µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.244732693Z level=info msg="Executing migration" id="Add preferences index org_id" 
23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.245407635Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=675.412µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.249408653Z level=info msg="Executing migration" id="Add preferences index user_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.250001316Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=592.093µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.256013494Z level=info msg="Executing migration" id="create alert table v1" 23:16:33 policy-api | Waiting for mariadb port 3306... 23:16:33 policy-api | mariadb (172.17.0.4:3306) open 23:16:33 policy-api | Waiting for policy-db-migrator port 6824... 23:16:33 policy-api | policy-db-migrator (172.17.0.8:6824) open 23:16:33 policy-api | Policy api config file: /opt/app/policy/api/etc/apiParameters.yaml 23:16:33 policy-api | 23:16:33 policy-api | . ____ _ __ _ _ 23:16:33 policy-api | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 23:16:33 policy-api | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 23:16:33 policy-api | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 23:16:33 policy-api | ' |____| .__|_| |_|_| |_\__, | / / / / 23:16:33 policy-api | =========|_|==============|___/=/_/_/_/ 23:16:33 policy-api | :: Spring Boot :: (v3.1.8) 23:16:33 policy-api | 23:16:33 policy-api | [2024-03-01T23:14:13.465+00:00|INFO|PolicyApiApplication|main] Starting PolicyApiApplication using Java 17.0.10 with PID 22 (/app/api.jar started by policy in /opt/app/policy/api/bin) 23:16:33 policy-api | [2024-03-01T23:14:13.466+00:00|INFO|PolicyApiApplication|main] No active profile set, falling back to 1 default profile: "default" 23:16:33 policy-api | [2024-03-01T23:14:15.169+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 23:16:33 policy-api | [2024-03-01T23:14:15.270+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 92 ms. Found 6 JPA repository interfaces. 23:16:33 policy-api | [2024-03-01T23:14:15.671+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 23:16:33 policy-api | [2024-03-01T23:14:15.672+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.api.main.exception.ServiceExceptionHandler 23:16:33 policy-api | [2024-03-01T23:14:16.303+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 23:16:33 policy-api | [2024-03-01T23:14:16.313+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 23:16:33 policy-api | [2024-03-01T23:14:16.315+00:00|INFO|StandardService|main] Starting service [Tomcat] 23:16:33 policy-api | [2024-03-01T23:14:16.315+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] 23:16:33 policy-api | [2024-03-01T23:14:16.400+00:00|INFO|[/policy/api/v1]|main] Initializing Spring embedded WebApplicationContext 23:16:33 policy-api | [2024-03-01T23:14:16.400+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 2869 ms 23:16:33 policy-api | [2024-03-01T23:14:16.789+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 23:16:33 policy-api | [2024-03-01T23:14:16.858+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 23:16:33 policy-api | [2024-03-01T23:14:16.863+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer 23:16:33 policy-api | [2024-03-01T23:14:16.906+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 23:16:33 policy-api | [2024-03-01T23:14:17.258+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 23:16:33 policy-api | [2024-03-01T23:14:17.277+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 23:16:33 policy-api | [2024-03-01T23:14:17.368+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@2620e717 23:16:33 policy-api | [2024-03-01T23:14:17.370+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 23:16:33 policy-api | [2024-03-01T23:14:19.179+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 23:16:33 policy-api | [2024-03-01T23:14:19.183+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 23:16:33 policy-api | [2024-03-01T23:14:20.224+00:00|WARN|ApiDatabaseInitializer|main] Detected multi-versioned type: policytypes/onap.policies.monitoring.tcagen2.v2.yaml 23:16:33 policy-api | [2024-03-01T23:14:21.006+00:00|INFO|ApiDatabaseInitializer|main] Multi-versioned Service Template [onap.policies.Monitoring, onap.policies.monitoring.tcagen2] 23:16:33 policy-api | [2024-03-01T23:14:22.064+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.256920148Z level=info msg="Migration successfully executed" id="create alert table v1" duration=906.173µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.264156501Z level=info msg="Executing migration" id="add index alert org_id & id " 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.265046515Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=889.274µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.270412079Z level=info msg="Executing migration" id="add index alert state" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.271197702Z level=info msg="Migration successfully executed" id="add index alert state" duration=785.943µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.276746948Z level=info msg="Executing migration" id="add index alert dashboard_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.277570931Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=823.903µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.282642204Z level=info msg="Executing migration" id="Create alert_rule_tag table v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.283263937Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=621.463µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.289385455Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.290223139Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=837.104µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.296467287Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.297738873Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=1.278526ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.302508815Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.317160361Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=14.652136ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.32109263Z level=info msg="Executing migration" id="Create alert_rule_tag table v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.321514941Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=425.551µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.328581824Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.329564078Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=984.264µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.332706932Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.332915413Z level=info msg="Migration successfully executed" id="copy 
alert_rule_tag v1 to v2" duration=208.791µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.336247468Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.33666376Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=416.532µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.342426576Z level=info msg="Executing migration" id="create alert_notification table v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.343100709Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=671.083µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.346108383Z level=info msg="Executing migration" id="Add column is_default" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.349602769Z level=info msg="Migration successfully executed" id="Add column is_default" duration=3.492936ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.353299735Z level=info msg="Executing migration" id="Add column frequency" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.356725322Z level=info msg="Migration successfully executed" id="Add column frequency" duration=3.425547ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.362437788Z level=info msg="Executing migration" id="Add column send_reminder" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.3676627Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=5.228042ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.371111867Z level=info msg="Executing migration" id="Add column disable_resolve_message" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.375093754Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=3.981617ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.37847827Z level=info msg="Executing migration" id="add index alert_notification org_id & name" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.379319454Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=840.654µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.387448231Z level=info msg="Executing migration" id="Update alert table charset" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.387488222Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=41.441µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.39165772Z level=info msg="Executing migration" id="Update alert_notification table charset" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.39170626Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=49.88µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.394945105Z level=info msg="Executing migration" id="create notification_journal table v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.395589737Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=644.322µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.399764077Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.40061442Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=850.033µs 
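(Editor's note: the policy-api startup lines above begin with dependency checks such as "Waiting for mariadb port 3306..." followed by "mariadb (172.17.0.4:3306) open" once the port accepts connections. In this job that probe is performed by the container's entrypoint script; the Java sketch below only illustrates the same TCP readiness loop, with host, port and timing values chosen for the example.)

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Hedged sketch of the readiness probe behind "Waiting for <host> port <port>...":
// poll the dependency with a TCP connect until it succeeds, then report it open.
public class WaitForPort {
    public static void main(String[] args) throws InterruptedException {
        String host = args.length > 0 ? args[0] : "mariadb";
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 3306;
        System.out.printf("Waiting for %s port %d...%n", host, port);
        while (true) {
            try (Socket socket = new Socket()) {
                // A successful connect means the dependency is accepting connections.
                socket.connect(new InetSocketAddress(host, port), 2000);
                System.out.printf("%s (%s:%d) open%n", host,
                        socket.getInetAddress().getHostAddress(), port);
                return;
            } catch (IOException retry) {
                Thread.sleep(1000); // dependency not up yet; poll again
            }
        }
    }
}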
23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.457021717Z level=info msg="Executing migration" id="drop alert_notification_journal" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.458417233Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=1.391236ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.462679762Z level=info msg="Executing migration" id="create alert_notification_state table v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.463234106Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=554.304µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.469621395Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.470470428Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=848.893µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.47511947Z level=info msg="Executing migration" id="Add for to alert table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.478710736Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=3.590956ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.483695788Z level=info msg="Executing migration" id="Add column uid in alert_notification" 23:16:33 kafka | ===> User 23:16:33 kafka | uid=1000(appuser) gid=1000(appuser) groups=1000(appuser) 23:16:33 kafka | ===> Configuring ... 23:16:33 kafka | Running in Zookeeper mode... 23:16:33 kafka | ===> Running preflight checks ... 23:16:33 kafka | ===> Check if /var/lib/kafka/data is writable ... 23:16:33 kafka | ===> Check if Zookeeper is healthy ... 23:16:33 kafka | SLF4J: Class path contains multiple SLF4J bindings. 23:16:33 kafka | SLF4J: Found binding in [jar:file:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] 23:16:33 kafka | SLF4J: Found binding in [jar:file:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar!/org/slf4j/impl/StaticLoggerBinder.class] 23:16:33 kafka | SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation. 23:16:33 kafka | SLF4J: Actual binding is of type [org.slf4j.impl.Reload4jLoggerFactory] 23:16:33 kafka | [2024-03-01 23:14:04,956] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:04,956] INFO Client environment:host.name=336a00f9700a (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:04,956] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:04,956] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:04,956] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.487264054Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.568266ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.493663503Z level=info msg="Executing migration" id="Update uid column values in alert_notification" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.493842085Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=178.892µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.504797824Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.505400337Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=602.183µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.511779476Z level=info msg="Executing migration" id="Remove unique index org_id_name" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.51275934Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=963.514µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.527520718Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.531982498Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=4.46473ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.538786229Z level=info msg="Executing migration" id="alter alert.settings to mediumtext" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.53887123Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=75.411µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.544579805Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.546316603Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=1.723298ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.551040534Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.552709252Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=1.668058ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.558871531Z level=info msg="Executing migration" id="Drop old annotation table v4" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.558944581Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=71.49µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.563729602Z level=info msg="Executing migration" id="create annotation table v5" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.564623757Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=893.555µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.569894991Z level=info msg="Executing migration" id="add index annotation 0 v3" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.571802348Z 
level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=1.906878ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.576616771Z level=info msg="Executing migration" id="add index annotation 1 v3" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.577741695Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=1.117124ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.588761336Z level=info msg="Executing migration" id="add index annotation 2 v3" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.590528384Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=1.767198ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.594623612Z level=info msg="Executing migration" id="add index annotation 3 v3" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.59853588Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=3.910328ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.60505354Z level=info msg="Executing migration" id="add index annotation 4 v3" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.606449987Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=1.397617ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.612737265Z level=info msg="Executing migration" id="Update annotation table charset" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.612758935Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=22.07µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.617161066Z level=info msg="Executing migration" id="Add column region_id to annotation table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.624246397Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=7.116501ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.629024649Z level=info msg="Executing migration" id="Drop category_id index" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.629914483Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=889.544µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.637430847Z level=info msg="Executing migration" id="Add column tags to annotation table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.644026017Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=6.56747ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.650492816Z level=info msg="Executing migration" id="Create annotation_tag table v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.650984179Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=491.093µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.66008878Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.661545606Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=1.456326ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.674747067Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.67540998Z level=info msg="Migration successfully executed" id="drop index 
UQE_annotation_tag_annotation_id_tag_id - v2" duration=661.383µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.68200138Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.700500054Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=18.498034ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.706854143Z level=info msg="Executing migration" id="Create annotation_tag table v3" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.707587056Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=726.143µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.71287177Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.713939985Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=1.067935ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.719168809Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.71955285Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=389.281µs 23:16:33 policy-api | [2024-03-01T23:14:22.251+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@407bfc49, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@2f84848e, org.springframework.security.web.context.SecurityContextHolderFilter@4567dcbc, org.springframework.security.web.header.HeaderWriterFilter@6aca85da, org.springframework.security.web.authentication.logout.LogoutFilter@67127bb1, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@dcaa0e8, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@6e11d059, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@19bd1f98, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@607c7f58, org.springframework.security.web.access.ExceptionTranslationFilter@5ca4763f, org.springframework.security.web.access.intercept.AuthorizationFilter@ab8b1ef] 23:16:33 policy-api | [2024-03-01T23:14:23.055+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 23:16:33 policy-api | [2024-03-01T23:14:23.150+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 23:16:33 policy-api | [2024-03-01T23:14:23.169+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/api/v1' 23:16:33 policy-api | [2024-03-01T23:14:23.189+00:00|INFO|PolicyApiApplication|main] Started PolicyApiApplication in 10.492 seconds (process running for 11.113) 23:16:33 policy-api | [2024-03-01T23:14:39.916+00:00|INFO|[/policy/api/v1]|http-nio-6969-exec-2] Initializing Spring DispatcherServlet 'dispatcherServlet' 23:16:33 policy-api | [2024-03-01T23:14:39.917+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Initializing Servlet 'dispatcherServlet' 23:16:33 policy-api | [2024-03-01T23:14:39.918+00:00|INFO|DispatcherServlet|http-nio-6969-exec-2] Completed initialization in 1 ms 23:16:33 
policy-api | [2024-03-01T23:14:41.712+00:00|INFO|OrderedServiceImpl|http-nio-6969-exec-3] ***** OrderedServiceImpl implementers: 23:16:33 policy-api | [] 23:16:33 policy-apex-pdp | Waiting for mariadb port 3306... 23:16:33 policy-apex-pdp | mariadb (172.17.0.4:3306) open 23:16:33 policy-apex-pdp | Waiting for kafka port 9092... 23:16:33 policy-apex-pdp | kafka (172.17.0.7:9092) open 23:16:33 policy-apex-pdp | Waiting for pap port 6969... 23:16:33 policy-apex-pdp | pap (172.17.0.10:6969) open 23:16:33 policy-apex-pdp | apexApps.sh: running application 'onappf' with command 'java -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -cp /opt/app/policy/apex-pdp/etc:/opt/app/policy/apex-pdp/etc/hazelcast:/opt/app/policy/apex-pdp/etc/infinispan:/opt/app/policy/apex-pdp/lib/* -Djavax.net.ssl.keyStore=/opt/app/policy/apex-pdp/etc/ssl/policy-keystore -Djavax.net.ssl.keyStorePassword=Pol1cy_0nap -Djavax.net.ssl.trustStore=/opt/app/policy/apex-pdp/etc/ssl/policy-truststore -Djavax.net.ssl.trustStorePassword=Pol1cy_0nap -Dlogback.configurationFile=/opt/app/policy/apex-pdp/etc/logback.xml -Dhazelcast.config=/opt/app/policy/apex-pdp/etc/hazelcast.xml -Dhazelcast.mancenter.enabled=false org.onap.policy.apex.services.onappf.ApexStarterMain -c /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json' 23:16:33 policy-apex-pdp | [2024-03-01T23:14:36.990+00:00|INFO|ApexStarterMain|main] In ApexStarter with parameters [-c, /opt/app/policy/apex-pdp/etc/onappf/config/OnapPfConfig.json] 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.250+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:33 policy-apex-pdp | allow.auto.create.topics = true 23:16:33 policy-apex-pdp | auto.commit.interval.ms = 5000 23:16:33 policy-apex-pdp | auto.include.jmx.reporter = true 23:16:33 policy-apex-pdp | auto.offset.reset = latest 23:16:33 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:16:33 policy-apex-pdp | check.crcs = true 23:16:33 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:16:33 policy-apex-pdp | client.id = consumer-d5634529-e7dd-41ae-91a6-87fa8cb77024-1 23:16:33 policy-apex-pdp | client.rack = 23:16:33 policy-apex-pdp | connections.max.idle.ms = 540000 23:16:33 policy-apex-pdp | default.api.timeout.ms = 60000 23:16:33 policy-apex-pdp | enable.auto.commit = true 23:16:33 policy-apex-pdp | exclude.internal.topics = true 23:16:33 policy-apex-pdp | fetch.max.bytes = 52428800 23:16:33 policy-apex-pdp | fetch.max.wait.ms = 500 23:16:33 policy-apex-pdp | fetch.min.bytes = 1 23:16:33 policy-apex-pdp | group.id = d5634529-e7dd-41ae-91a6-87fa8cb77024 23:16:33 policy-apex-pdp | group.instance.id = null 23:16:33 policy-apex-pdp | heartbeat.interval.ms = 3000 23:16:33 policy-apex-pdp | interceptor.classes = [] 23:16:33 policy-apex-pdp | internal.leave.group.on.close = true 23:16:33 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:33 policy-apex-pdp | isolation.level = read_uncommitted 23:16:33 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:33 policy-apex-pdp | max.partition.fetch.bytes = 1048576 23:16:33 policy-apex-pdp | max.poll.interval.ms = 300000 23:16:33 policy-apex-pdp | max.poll.records = 500 23:16:33 policy-apex-pdp | metadata.max.age.ms = 300000 23:16:33 policy-apex-pdp | metric.reporters = [] 23:16:33 policy-apex-pdp | metrics.num.samples = 2 23:16:33 policy-apex-pdp | metrics.recording.level = INFO 23:16:33 policy-apex-pdp | metrics.sample.window.ms = 30000 23:16:33 policy-apex-pdp | 
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:33 policy-apex-pdp | receive.buffer.bytes = 65536 23:16:33 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:16:33 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:33 policy-apex-pdp | request.timeout.ms = 30000 23:16:33 policy-apex-pdp | retry.backoff.ms = 100 23:16:33 policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:33 policy-apex-pdp | sasl.jaas.config = null 23:16:33 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:33 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:33 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:33 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:33 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:33 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:33 policy-apex-pdp | sasl.login.class = null 23:16:33 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:33 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:33 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:33 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:33 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:33 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:33 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:33 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:33 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:33 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:33 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:33 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:33 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:33 kafka | [2024-03-01 23:14:04,956] INFO Client 
environment:java.class.path=/usr/share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/share/java/kafka/jersey-common-2.39.1.jar:/usr/share/java/kafka/swagger-annotations-2.2.8.jar:/usr/share/java/kafka/jose4j-0.9.3.jar:/usr/share/java/kafka/commons-validator-1.7.jar:/usr/share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/share/java/kafka/rocksdbjni-7.9.2.jar:/usr/share/java/kafka/jackson-annotations-2.13.5.jar:/usr/share/java/kafka/commons-io-2.11.0.jar:/usr/share/java/kafka/javax.activation-api-1.2.0.jar:/usr/share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/share/java/kafka/commons-cli-1.4.jar:/usr/share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/share/java/kafka/scala-reflect-2.13.11.jar:/usr/share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/share/java/kafka/jline-3.22.0.jar:/usr/share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/share/java/kafka/hk2-api-2.6.1.jar:/usr/share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/share/java/kafka/kafka.jar:/usr/share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/share/java/kafka/scala-library-2.13.11.jar:/usr/share/java/kafka/jakarta.inject-2.6.1.jar:/usr/share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/share/java/kafka/hk2-locator-2.6.1.jar:/usr/share/java/kafka/reflections-0.10.2.jar:/usr/share/java/kafka/slf4j-api-1.7.36.jar:/usr/share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/share/java/kafka/paranamer-2.8.jar:/usr/share/java/kafka/commons-beanutils-1.9.4.jar:/usr/share/java/kafka/jaxb-api-2.3.1.jar:/usr/share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/share/java/kafka/hk2-utils-2.6.1.jar:/usr/share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/share/java/kafka/reload4j-1.2.25.jar:/usr/share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/share/java/kafka/jackson-core-2.13.5.jar:/usr/share/java/kafka/jersey-hk2-2.39.1.jar:/usr/share/java/kafka/jackson-databind-2.13.5.jar:/usr/share/java/kafka/jersey-client-2.39.1.jar:/usr/share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/share/java/kafka/commons-digester-2.1.jar:/usr/share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/share/java/kafka/argparse4j-0.7.0.jar:/usr/share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/kafka/audience-annotations-0.12.0.jar:/usr/share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/kafka/maven-artifact-3.8
.8.jar:/usr/share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/share/java/kafka/jersey-server-2.39.1.jar:/usr/share/java/kafka/commons-lang3-3.8.1.jar:/usr/share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/share/java/kafka/jopt-simple-5.0.4.jar:/usr/share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/share/java/kafka/lz4-java-1.8.0.jar:/usr/share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/share/java/kafka/checker-qual-3.19.0.jar:/usr/share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/share/java/kafka/pcollections-4.0.1.jar:/usr/share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/share/java/kafka/commons-logging-1.2.jar:/usr/share/java/kafka/jsr305-3.0.2.jar:/usr/share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/kafka/metrics-core-2.2.0.jar:/usr/share/java/kafka/zookeeper-jute-3.8.3.jar:/usr/share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/share/java/kafka/commons-collections-3.2.2.jar:/usr/share/java/kafka/javassist-3.29.2-GA.jar:/usr/share/java/kafka/caffeine-2.9.3.jar:/usr/share/java/kafka/plexus-utils-3.3.1.jar:/usr/share/java/kafka/zookeeper-3.8.3.jar:/usr/share/java/kafka/activation-1.1.1.jar:/usr/share/java/kafka/netty-common-4.1.100.Final.jar:/usr/share/java/kafka/metrics-core-4.1.12.1.jar:/usr/share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/share/java/kafka/snappy-java-1.1.10.5.jar:/usr/share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/share/java/cp-base-new/scala-logging_2.13-3.9.4.jar:/usr/share/java/cp-base-new/jackson-datatype-jdk8-2.14.2.jar:/usr/share/java/cp-base-new/jose4j-0.9.3.jar:/usr/share/java/cp-base-new/commons-validator-1.7.jar:/usr/share/java/cp-base-new/logredactor-1.0.12.jar:/usr/share/java/cp-base-new/jolokia-core-1.7.1.jar:/usr/share/java/cp-base-new/re2j-1.6.jar:/usr/share/java/cp-base-new/scala-library-2.13.10.jar:/usr/share/java/cp-base-new/commons-io-2.11.0.jar:/usr/share/java/cp-base-new/commons-cli-1.4.jar:/usr/share/java/cp-base-new/slf4j-reload4j-1.7.36.jar:/usr/share/java/cp-base-new/jackson-annotations-2.14.2.jar:/usr/share/java/cp-base-new/json-simple-1.1.1.jar:/usr/share/java/cp-base-new/jackson-module-scala_2.13-2.14.2.jar:/usr/share/java/cp-base-new/scala-java8-compat_2.13-1.0.2.jar:/usr/share/java/cp-base-new/kafka-server-common-7.6.0-ccs.jar:/usr/share/java/cp-base-new/minimal-json-0.9.5.jar:/usr/share/java/cp-base-new/zstd-jni-1.5.5-1.jar:/usr/share/java/cp-base-new/jackson-dataformat-yaml-2.14.2.jar:/usr/share/java/cp-base-new/slf4j-api-1.7.36.jar:/usr/share/java/cp-base-new/paranamer-2.8.jar:/usr/share/java/cp-base-new/commons-beanutils-1.9.4.jar:/usr
/share/java/cp-base-new/jmx_prometheus_javaagent-0.18.0.jar:/usr/share/java/cp-base-new/reload4j-1.2.25.jar:/usr/share/java/cp-base-new/jackson-core-2.14.2.jar:/usr/share/java/cp-base-new/common-utils-7.6.0.jar:/usr/share/java/cp-base-new/commons-digester-2.1.jar:/usr/share/java/cp-base-new/argparse4j-0.7.0.jar:/usr/share/java/cp-base-new/kafka-raft-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-group-coordinator-7.6.0-ccs.jar:/usr/share/java/cp-base-new/audience-annotations-0.12.0.jar:/usr/share/java/cp-base-new/gson-2.9.0.jar:/usr/share/java/cp-base-new/snakeyaml-2.0.jar:/usr/share/java/cp-base-new/kafka-storage-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-metadata-7.6.0-ccs.jar:/usr/share/java/cp-base-new/disk-usage-agent-7.6.0.jar:/usr/share/java/cp-base-new/jopt-simple-5.0.4.jar:/usr/share/java/cp-base-new/error_prone_annotations-2.10.0.jar:/usr/share/java/cp-base-new/lz4-java-1.8.0.jar:/usr/share/java/cp-base-new/logredactor-metrics-1.0.12.jar:/usr/share/java/cp-base-new/checker-qual-3.19.0.jar:/usr/share/java/cp-base-new/pcollections-4.0.1.jar:/usr/share/java/cp-base-new/kafka-tools-api-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka_2.13-7.6.0-ccs.jar:/usr/share/java/cp-base-new/kafka-clients-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-logging-1.2.jar:/usr/share/java/cp-base-new/scala-reflect-2.13.10.jar:/usr/share/java/cp-base-new/scala-collection-compat_2.13-2.10.0.jar:/usr/share/java/cp-base-new/metrics-core-2.2.0.jar:/usr/share/java/cp-base-new/zookeeper-jute-3.8.3.jar:/usr/share/java/cp-base-new/jackson-dataformat-csv-2.14.2.jar:/usr/share/java/cp-base-new/utility-belt-7.6.0.jar:/usr/share/java/cp-base-new/kafka-storage-7.6.0-ccs.jar:/usr/share/java/cp-base-new/commons-collections-3.2.2.jar:/usr/share/java/cp-base-new/caffeine-2.9.3.jar:/usr/share/java/cp-base-new/jolokia-jvm-1.7.1.jar:/usr/share/java/cp-base-new/zookeeper-3.8.3.jar:/usr/share/java/cp-base-new/metrics-core-4.1.12.1.jar:/usr/share/java/cp-base-new/jackson-databind-2.14.2.jar:/usr/share/java/cp-base-new/snappy-java-1.1.10.5.jar (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:04,957] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:04,957] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:04,957] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:04,957] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:04,957] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:04,957] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:04,957] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:04,957] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:04,957] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:04,957] INFO Client environment:os.memory.free=487MB (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:04,957] INFO Client environment:os.memory.max=8042MB (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:04,957] INFO Client environment:os.memory.total=504MB 
(org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:04,960] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=40000 watcher=io.confluent.admin.utils.ZookeeperConnectionWatcher@184cf7cf (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:04,964] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 23:16:33 kafka | [2024-03-01 23:14:04,968] INFO jute.maxbuffer value is 1048575 Bytes (org.apache.zookeeper.ClientCnxnSocket) 23:16:33 kafka | [2024-03-01 23:14:04,975] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 23:16:33 kafka | [2024-03-01 23:14:04,997] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. (org.apache.zookeeper.ClientCnxn) 23:16:33 kafka | [2024-03-01 23:14:04,998] INFO SASL config status: Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) 23:16:33 kafka | [2024-03-01 23:14:05,004] INFO Socket connection established, initiating session, client: /172.17.0.7:38812, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn) 23:16:33 kafka | [2024-03-01 23:14:05,038] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x100000371220000, negotiated timeout = 40000 (org.apache.zookeeper.ClientCnxn) 23:16:33 kafka | [2024-03-01 23:14:05,158] INFO Session: 0x100000371220000 closed (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:05,158] INFO EventThread shut down for session: 0x100000371220000 (org.apache.zookeeper.ClientCnxn) 23:16:33 kafka | Using log4j config /etc/kafka/log4j.properties 23:16:33 kafka | ===> Launching ... 23:16:33 kafka | ===> Launching kafka ... 23:16:33 kafka | [2024-03-01 23:14:05,795] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) 23:16:33 kafka | [2024-03-01 23:14:06,104] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) 23:16:33 kafka | [2024-03-01 23:14:06,176] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) 23:16:33 kafka | [2024-03-01 23:14:06,177] INFO starting (kafka.server.KafkaServer) 23:16:33 kafka | [2024-03-01 23:14:06,177] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) 23:16:33 kafka | [2024-03-01 23:14:06,190] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. (kafka.zookeeper.ZooKeeperClient) 23:16:33 kafka | [2024-03-01 23:14:06,193] INFO Client environment:zookeeper.version=3.8.3-6ad6d364c7c0bcf0de452d54ebefa3058098ab56, built on 2023-10-05 10:34 UTC (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:06,193] INFO Client environment:host.name=336a00f9700a (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:06,193] INFO Client environment:java.version=11.0.21 (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:06,194] INFO Client environment:java.vendor=Azul Systems, Inc. 
(org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:06,194] INFO Client environment:java.home=/usr/lib/jvm/java-11-zulu-openjdk-ca (org.apache.zookeeper.ZooKeeper) 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.728992803Z level=info msg="Executing migration" id="drop table annotation_tag_v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.729886958Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=898.415µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.736682218Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.7371413Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=458.412µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.743086577Z level=info msg="Executing migration" id="Add created time to annotation table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.749822848Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=6.735541ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.759773783Z level=info msg="Executing migration" id="Add updated time to annotation table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.766075922Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=6.300039ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.77224156Z level=info msg="Executing migration" id="Add index for created in annotation table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.773165644Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=923.844µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.780405927Z level=info msg="Executing migration" id="Add index for updated in annotation table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.781851973Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.441016ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.791010766Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.791433028Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=421.532µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.796232279Z level=info msg="Executing migration" id="Add epoch_end column" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.80297059Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=6.737821ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.809449149Z level=info msg="Executing migration" id="Add index for epoch_end" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.810416424Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=967.185µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.81400774Z level=info msg="Executing migration" id="Make epoch_end the same as epoch" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.814411762Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=403.232µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.821572994Z level=info msg="Executing migration" id="Move region to 
single row" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.822314418Z level=info msg="Migration successfully executed" id="Move region to single row" duration=741.714µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.826108175Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.827465461Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=1.357176ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.832459853Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.834444713Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=1.98487ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.843642134Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.845216702Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=1.574208ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.850144395Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.851730281Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=1.586006ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.85567736Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.856930035Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=1.253825ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.862110308Z level=info msg="Executing migration" id="Add index for alert_id on annotation table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.863276633Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=1.165575ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.868282816Z level=info msg="Executing migration" id="Increase tags column to length 4096" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.868512277Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=229.191µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.874412174Z level=info msg="Executing migration" id="create test_data table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.875824901Z level=info msg="Migration successfully executed" id="create test_data table" duration=1.413347ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.883508306Z level=info msg="Executing migration" id="create dashboard_version table v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.885076223Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=1.566977ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.892882508Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.893887653Z level=info 
msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=1.005075ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.91099015Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.912645798Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=1.655258ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.919487149Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.919711591Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=224.652µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.92616941Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.926534721Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=365.111µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.935436482Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1" 23:16:33 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:33 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:33 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:33 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:33 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:33 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:16:33 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:33 policy-apex-pdp | security.providers = null 23:16:33 policy-apex-pdp | send.buffer.bytes = 131072 23:16:33 policy-apex-pdp | session.timeout.ms = 45000 23:16:33 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:33 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:33 policy-apex-pdp | ssl.cipher.suites = null 23:16:33 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:33 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:33 policy-apex-pdp | ssl.engine.factory.class = null 23:16:33 policy-apex-pdp | ssl.key.password = null 23:16:33 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:16:33 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:16:33 policy-apex-pdp | ssl.keystore.key = null 23:16:33 policy-apex-pdp | ssl.keystore.location = null 23:16:33 policy-apex-pdp | ssl.keystore.password = null 23:16:33 policy-apex-pdp | ssl.keystore.type = JKS 23:16:33 policy-apex-pdp | ssl.protocol = TLSv1.3 23:16:33 policy-apex-pdp | ssl.provider = null 23:16:33 policy-apex-pdp | ssl.secure.random.implementation = null 23:16:33 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:16:33 policy-apex-pdp | ssl.truststore.certificates = null 23:16:33 policy-apex-pdp | ssl.truststore.location = null 23:16:33 policy-apex-pdp | ssl.truststore.password = null 23:16:33 policy-apex-pdp | ssl.truststore.type = JKS 23:16:33 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:33 policy-apex-pdp | 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.395+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:33 policy-apex-pdp | 
[2024-03-01T23:14:37.395+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.395+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709334877393 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.397+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-d5634529-e7dd-41ae-91a6-87fa8cb77024-1, groupId=d5634529-e7dd-41ae-91a6-87fa8cb77024] Subscribed to topic(s): policy-pdp-pap 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.409+00:00|INFO|ServiceManager|main] service manager starting 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.410+00:00|INFO|ServiceManager|main] service manager starting topics 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.413+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=d5634529-e7dd-41ae-91a6-87fa8cb77024, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: starting 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.433+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:33 policy-apex-pdp | allow.auto.create.topics = true 23:16:33 policy-apex-pdp | auto.commit.interval.ms = 5000 23:16:33 policy-apex-pdp | auto.include.jmx.reporter = true 23:16:33 policy-apex-pdp | auto.offset.reset = latest 23:16:33 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:16:33 policy-apex-pdp | check.crcs = true 23:16:33 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:16:33 policy-apex-pdp | client.id = consumer-d5634529-e7dd-41ae-91a6-87fa8cb77024-2 23:16:33 policy-apex-pdp | client.rack = 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.935580432Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=137.55µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.941532959Z level=info msg="Executing migration" id="create team table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.942597655Z level=info msg="Migration successfully executed" id="create team table" duration=1.064096ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.951206634Z level=info msg="Executing migration" id="add index team.org_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.952881651Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.678797ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.992891553Z level=info msg="Executing migration" id="add unique index team_org_id_name" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:58.995127443Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=2.23855ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.002083825Z level=info msg="Executing migration" id="Add column uid in team" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.007203623Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=5.119438ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.015771339Z level=info msg="Executing migration" id="Update uid column values in team" 23:16:33 grafana | 
logger=migrator t=2024-03-01T23:13:59.01607073Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=300.051µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.02608516Z level=info msg="Executing migration" id="Add unique index team_org_id_uid" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.027770005Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=1.686926ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.035620658Z level=info msg="Executing migration" id="create team member table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.03630232Z level=info msg="Migration successfully executed" id="create team member table" duration=682.392µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.042083947Z level=info msg="Executing migration" id="add index team_member.org_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.043458342Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=1.374115ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.048685248Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.04976712Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=1.078682ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.055764308Z level=info msg="Executing migration" id="add index team_member.team_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.056855411Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=1.090923ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.064751415Z level=info msg="Executing migration" id="Add column email to team table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.068492637Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=3.741562ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.074874215Z level=info msg="Executing migration" id="Add column external to team_member table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.081484484Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=6.609889ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.103396248Z level=info msg="Executing migration" id="Add column permission to team_member table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.110469679Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=7.077111ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.115601063Z level=info msg="Executing migration" id="create dashboard acl table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.116637817Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.036164ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.122408353Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.123445236Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=1.036583ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.175485467Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id" 23:16:33 grafana | 
logger=migrator t=2024-03-01T23:13:59.177289392Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.803615ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.182819538Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.183891252Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=1.071374ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.19364476Z level=info msg="Executing migration" id="add index dashboard_acl_user_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.195271185Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=1.625875ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.20407946Z level=info msg="Executing migration" id="add index dashboard_acl_team_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.205103983Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=1.018613ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.213565757Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.215211703Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=1.644916ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.223104895Z level=info msg="Executing migration" id="add index dashboard_permission" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.224679189Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=1.573544ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.236165453Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.236671495Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=506.352µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.244085876Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.244460197Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=373.511µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.251764569Z level=info msg="Executing migration" id="create tag table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.252816701Z level=info msg="Migration successfully executed" id="create tag table" duration=1.051562ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.260337673Z level=info msg="Executing migration" id="add index tag.key_value" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.261792718Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=1.454615ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.271878906Z level=info msg="Executing migration" id="create login attempt table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.272597288Z level=info msg="Migration successfully executed" id="create login attempt table" duration=718.242µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.286021448Z level=info msg="Executing migration" id="add index login_attempt.username" 23:16:33 grafana | 
logger=migrator t=2024-03-01T23:13:59.287467362Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=1.445214ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.294303032Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.295737456Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=1.434864ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.301137612Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.318274351Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=17.140789ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.323054986Z level=info msg="Executing migration" id="create login_attempt v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.323557647Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=505.641µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.329948085Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.330876088Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=927.863µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.338138479Z level=info msg="Executing migration" id="copy login_attempt v1 to v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.33860589Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=467.921µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.345842942Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.347012885Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=1.173243ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.351834839Z level=info msg="Executing migration" id="create user auth table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.352594331Z level=info msg="Migration successfully executed" id="create user auth table" duration=759.222µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.357930506Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.358847539Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=911.433µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.36598767Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.36611713Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=124.13µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.371521976Z level=info msg="Executing migration" id="Add OAuth access token to user_auth" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.377829424Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=6.308258ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.383754571Z level=info msg="Executing migration" id="Add OAuth refresh 
token to user_auth" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.392317806Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=8.558985ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.403772649Z level=info msg="Executing migration" id="Add OAuth token type to user_auth" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.411979543Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=8.204954ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.417305958Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.42448337Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=7.176752ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.428963892Z level=info msg="Executing migration" id="Add index to user_id column in user_auth" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.430299726Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=1.335494ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.437355216Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.446835715Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=9.479539ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.453888764Z level=info msg="Executing migration" id="create server_lock table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.454848748Z level=info msg="Migration successfully executed" id="create server_lock table" duration=959.874µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.460622845Z level=info msg="Executing migration" id="add index server_lock.operation_uid" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.461608887Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=985.872µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.496036457Z level=info msg="Executing migration" id="create user auth token table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.497599522Z level=info msg="Migration successfully executed" id="create user auth token table" duration=1.564625ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.505875376Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.507143209Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=1.267503ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.513886579Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.515330743Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=1.443744ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.522430804Z level=info msg="Executing migration" id="add index user_auth_token.user_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.524880501Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=2.449097ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.531234049Z level=info msg="Executing migration" id="Add revoked_at to the 
user auth token" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.53863536Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=7.400721ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.547141955Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.54893869Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=1.795835ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.554685347Z level=info msg="Executing migration" id="create cache_data table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.55556529Z level=info msg="Migration successfully executed" id="create cache_data table" duration=881.093µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.563711843Z level=info msg="Executing migration" id="add unique index cache_data.cache_key" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.565518578Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=1.812235ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.571319845Z level=info msg="Executing migration" id="create short_url table v1" 23:16:33 kafka | [2024-03-01 23:14:06,194] INFO Client environment:java.class.path=/usr/bin/../share/java/kafka/scala-logging_2.13-3.9.4.jar:/usr/bin/../share/java/kafka/jersey-common-2.39.1.jar:/usr/bin/../share/java/kafka/swagger-annotations-2.2.8.jar:/usr/bin/../share/java/kafka/jose4j-0.9.3.jar:/usr/bin/../share/java/kafka/commons-validator-1.7.jar:/usr/bin/../share/java/kafka/javax.servlet-api-3.1.0.jar:/usr/bin/../share/java/kafka/aopalliance-repackaged-2.6.1.jar:/usr/bin/../share/java/kafka/netty-transport-4.1.100.Final.jar:/usr/bin/../share/java/kafka/rocksdbjni-7.9.2.jar:/usr/bin/../share/java/kafka/jackson-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/commons-io-2.11.0.jar:/usr/bin/../share/java/kafka/javax.activation-api-1.2.0.jar:/usr/bin/../share/java/kafka/connect-mirror-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-cli-1.4.jar:/usr/bin/../share/java/kafka/slf4j-reload4j-1.7.36.jar:/usr/bin/../share/java/kafka/scala-reflect-2.13.11.jar:/usr/bin/../share/java/kafka/netty-buffer-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jline-3.22.0.jar:/usr/bin/../share/java/kafka/jakarta.ws.rs-api-2.1.6.jar:/usr/bin/../share/java/kafka/jakarta.annotation-api-1.3.5.jar:/usr/bin/../share/java/kafka/scala-java8-compat_2.13-1.0.2.jar:/usr/bin/../share/java/kafka/javax.ws.rs-api-2.1.1.jar:/usr/bin/../share/java/kafka/kafka-server-common-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-tools-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-log4j-appender-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/connect-json-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-resolver-4.1.100.Final.jar:/usr/bin/../share/java/kafka/hk2-api-2.6.1.jar:/usr/bin/../share/java/kafka/zstd-jni-1.5.5-1.jar:/usr/bin/../share/java/kafka/jetty-http-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jackson-dataformat-csv-2.13.5.jar:/usr/bin/../share/java/kafka/kafka.jar:/usr/bin/../share/java/kafka/jetty-servlets-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/scala-library-2.13.11.jar:/usr/bin/../share/java/kafka/jakarta.inject-2.6.1.jar:/usr/bin/../share/java/kafka/jetty-continuation-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jakarta.xml.bind-api-2.3.3.jar:/usr/bin/../share/java/kafka/connect-basic-auth-extension-7.6.0-ccs.jar:/usr/bin/../share/j
ava/kafka/hk2-locator-2.6.1.jar:/usr/bin/../share/java/kafka/reflections-0.10.2.jar:/usr/bin/../share/java/kafka/slf4j-api-1.7.36.jar:/usr/bin/../share/java/kafka/jetty-server-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/paranamer-2.8.jar:/usr/bin/../share/java/kafka/commons-beanutils-1.9.4.jar:/usr/bin/../share/java/kafka/jaxb-api-2.3.1.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-2.39.1.jar:/usr/bin/../share/java/kafka/hk2-utils-2.6.1.jar:/usr/bin/../share/java/kafka/jackson-module-scala_2.13-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-streams-examples-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/reload4j-1.2.25.jar:/usr/bin/../share/java/kafka/connect-transforms-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jackson-core-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-hk2-2.39.1.jar:/usr/bin/../share/java/kafka/jackson-databind-2.13.5.jar:/usr/bin/../share/java/kafka/jersey-client-2.39.1.jar:/usr/bin/../share/java/kafka/jetty-io-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/osgi-resource-locator-1.0.3.jar:/usr/bin/../share/java/kafka/connect-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-digester-2.1.jar:/usr/bin/../share/java/kafka/netty-transport-native-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/argparse4j-0.7.0.jar:/usr/bin/../share/java/kafka/jackson-datatype-jdk8-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-raft-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-scala_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-group-coordinator-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/audience-annotations-0.12.0.jar:/usr/bin/../share/java/kafka/jackson-module-jaxb-annotations-2.13.5.jar:/usr/bin/../share/java/kafka/javax.annotation-api-1.3.2.jar:/usr/bin/../share/java/kafka/kafka-storage-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/maven-artifact-3.8.8.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-json-provider-2.13.5.jar:/usr/bin/../share/java/kafka/kafka-metadata-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jakarta.validation-api-2.0.2.jar:/usr/bin/../share/java/kafka/jersey-server-2.39.1.jar:/usr/bin/../share/java/kafka/commons-lang3-3.8.1.jar:/usr/bin/../share/java/kafka/kafka-streams-test-utils-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/jopt-simple-5.0.4.jar:/usr/bin/../share/java/kafka/error_prone_annotations-2.10.0.jar:/usr/bin/../share/java/kafka/lz4-java-1.8.0.jar:/usr/bin/../share/java/kafka/jakarta.activation-api-1.2.2.jar:/usr/bin/../share/java/kafka/jersey-container-servlet-core-2.39.1.jar:/usr/bin/../share/java/kafka/checker-qual-3.19.0.jar:/usr/bin/../share/java/kafka/connect-mirror-client-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/pcollections-4.0.1.jar:/usr/bin/../share/java/kafka/jackson-jaxrs-base-2.13.5.jar:/usr/bin/../share/java/kafka/jetty-client-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/kafka-tools-api-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-streams-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka_2.13-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-clients-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/commons-logging-1.2.jar:/usr/bin/../share/java/kafka/jsr305-3.0.2.jar:/usr/bin/../share/java/kafka/trogdor-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-shell-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-codec-4.1.100.Final.jar:/usr/bin/../share/java/kafka/netty-transport-native-unix-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/scala-collection-compat_2.13-2.10.0.jar:/usr/bin/../share/java/kafka/metrics-core-2.2.0.jar:/usr/bin/../sha
re/java/kafka/zookeeper-jute-3.8.3.jar:/usr/bin/../share/java/kafka/connect-runtime-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/kafka-storage-7.6.0-ccs.jar:/usr/bin/../share/java/kafka/netty-handler-4.1.100.Final.jar:/usr/bin/../share/java/kafka/commons-collections-3.2.2.jar:/usr/bin/../share/java/kafka/javassist-3.29.2-GA.jar:/usr/bin/../share/java/kafka/caffeine-2.9.3.jar:/usr/bin/../share/java/kafka/plexus-utils-3.3.1.jar:/usr/bin/../share/java/kafka/zookeeper-3.8.3.jar:/usr/bin/../share/java/kafka/activation-1.1.1.jar:/usr/bin/../share/java/kafka/netty-common-4.1.100.Final.jar:/usr/bin/../share/java/kafka/metrics-core-4.1.12.1.jar:/usr/bin/../share/java/kafka/jetty-servlet-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/jetty-util-ajax-9.4.53.v20231009.jar:/usr/bin/../share/java/kafka/snappy-java-1.1.10.5.jar:/usr/bin/../share/java/kafka/netty-transport-classes-epoll-4.1.100.Final.jar:/usr/bin/../share/java/kafka/jetty-security-9.4.53.v20231009.jar:/usr/bin/../share/java/confluent-telemetry/* (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:06,194] INFO Client environment:java.library.path=/usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:06,194] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:06,194] INFO Client environment:java.compiler= (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:06,194] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:06,194] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:06,194] INFO Client environment:os.version=4.15.0-192-generic (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:06,194] INFO Client environment:user.name=appuser (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:06,194] INFO Client environment:user.home=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:06,194] INFO Client environment:user.dir=/home/appuser (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:06,194] INFO Client environment:os.memory.free=1007MB (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:06,194] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:06,194] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:06,196] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@1f6c9cd8 (org.apache.zookeeper.ZooKeeper) 23:16:33 kafka | [2024-03-01 23:14:06,199] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) 23:16:33 kafka | [2024-03-01 23:14:06,205] INFO zookeeper.request.timeout value is 0. feature enabled=false (org.apache.zookeeper.ClientCnxn) 23:16:33 mariadb | 2024-03-01 23:13:58+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 23:16:33 mariadb | 2024-03-01 23:13:58+00:00 [Note] [Entrypoint]: Switching to dedicated user 'mysql' 23:16:33 mariadb | 2024-03-01 23:13:58+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started. 
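Editorial note: the broker entries above show Kafka opening its own ZooKeeper session (connectString=zookeeper:2181, sessionTimeout=18000, a client-side watcher) and later reporting "Session establishment complete". As a minimal sketch of what that client handshake looks like with the standard org.apache.zookeeper client, assuming the kafka-clients/zookeeper jars on the classpath and that zookeeper:2181 resolves inside the compose network (this is not the CSIT code itself):

    import java.util.concurrent.CountDownLatch;
    import org.apache.zookeeper.WatchedEvent;
    import org.apache.zookeeper.Watcher;
    import org.apache.zookeeper.ZooKeeper;

    public class ZkConnectSketch {
        public static void main(String[] args) throws Exception {
            CountDownLatch connected = new CountDownLatch(1);
            // Same connect string and session timeout as in the broker log above.
            ZooKeeper zk = new ZooKeeper("zookeeper:2181", 18000, (WatchedEvent event) -> {
                if (event.getState() == Watcher.Event.KeeperState.SyncConnected) {
                    connected.countDown();   // corresponds to "Session establishment complete" in the log
                }
            });
            connected.await();
            System.out.println("session id = 0x" + Long.toHexString(zk.getSessionId()));
            zk.close();                      // corresponds to "Session: ... closed"
        }
    }

The negotiated timeout and session id printed by such a client are the same values the broker logs above (e.g. session id 0x100000371220001, negotiated timeout 18000).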
23:16:33 mariadb | 2024-03-01 23:13:58+00:00 [Note] [Entrypoint]: Initializing database files 23:16:33 mariadb | 2024-03-01 23:13:58 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:33 mariadb | 2024-03-01 23:13:58 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:33 mariadb | 2024-03-01 23:13:58 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:33 mariadb | 23:16:33 mariadb | 23:16:33 mariadb | PLEASE REMEMBER TO SET A PASSWORD FOR THE MariaDB root USER ! 23:16:33 mariadb | To do so, start the server, then issue the following command: 23:16:33 mariadb | 23:16:33 mariadb | '/usr/bin/mysql_secure_installation' 23:16:33 mariadb | 23:16:33 mariadb | which will also give you the option of removing the test 23:16:33 mariadb | databases and anonymous user created by default. This is 23:16:33 mariadb | strongly recommended for production servers. 23:16:33 mariadb | 23:16:33 mariadb | See the MariaDB Knowledgebase at https://mariadb.com/kb 23:16:33 mariadb | 23:16:33 mariadb | Please report any problems at https://mariadb.org/jira 23:16:33 mariadb | 23:16:33 mariadb | The latest information about MariaDB is available at https://mariadb.org/. 23:16:33 mariadb | 23:16:33 mariadb | Consider joining MariaDB's strong and vibrant community: 23:16:33 mariadb | https://mariadb.org/get-involved/ 23:16:33 mariadb | 23:16:33 mariadb | 2024-03-01 23:14:00+00:00 [Note] [Entrypoint]: Database files initialized 23:16:33 mariadb | 2024-03-01 23:14:00+00:00 [Note] [Entrypoint]: Starting temporary server 23:16:33 mariadb | 2024-03-01 23:14:00+00:00 [Note] [Entrypoint]: Waiting for server startup 23:16:33 mariadb | 2024-03-01 23:14:00 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 99 ... 23:16:33 mariadb | 2024-03-01 23:14:00 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 23:16:33 mariadb | 2024-03-01 23:14:00 0 [Note] InnoDB: Number of transaction pools: 1 23:16:33 mariadb | 2024-03-01 23:14:00 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 23:16:33 mariadb | 2024-03-01 23:14:00 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 23:16:33 mariadb | 2024-03-01 23:14:00 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:33 mariadb | 2024-03-01 23:14:00 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:33 mariadb | 2024-03-01 23:14:00 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 23:16:33 mariadb | 2024-03-01 23:14:00 0 [Note] InnoDB: Completed initialization of buffer pool 23:16:33 mariadb | 2024-03-01 23:14:00 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 23:16:33 mariadb | 2024-03-01 23:14:00 0 [Note] InnoDB: 128 rollback segments are active. 23:16:33 mariadb | 2024-03-01 23:14:00 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 23:16:33 mariadb | 2024-03-01 23:14:00 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 23:16:33 mariadb | 2024-03-01 23:14:00 0 [Note] InnoDB: log sequence number 46590; transaction id 14 23:16:33 mariadb | 2024-03-01 23:14:00 0 [Note] Plugin 'FEEDBACK' is disabled. 
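Editorial note: the MariaDB entrypoint output above covers InnoDB bring-up and the temporary server used for provisioning before it reports "ready for connections". Purely as an illustration of how a test harness might probe that container over JDBC, here is a minimal sketch; the database name, credentials, and the presence of the MariaDB Connector/J driver are assumptions for the example, not values taken from the CSIT configuration:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class MariaDbReadinessSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details; the real credentials come from the compose/init files.
            String url = "jdbc:mariadb://mariadb:3306/policyadmin";
            try (Connection conn = DriverManager.getConnection(url, "policy_user", "policy_user");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT VERSION()")) {
                if (rs.next()) {
                    // Expected to report a 10.10.2-MariaDB build, matching the server banner in the log.
                    System.out.println("MariaDB is up: " + rs.getString(1));
                }
            }
        }
    }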
23:16:33 mariadb | 2024-03-01 23:14:00 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:33 mariadb | 2024-03-01 23:14:00 0 [Warning] 'user' entry 'root@mariadb' ignored in --skip-name-resolve mode. 23:16:33 mariadb | 2024-03-01 23:14:00 0 [Warning] 'proxies_priv' entry '@% root@mariadb' ignored in --skip-name-resolve mode. 23:16:33 mariadb | 2024-03-01 23:14:00 0 [Note] mariadbd: ready for connections. 23:16:33 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 0 mariadb.org binary distribution 23:16:33 mariadb | 2024-03-01 23:14:01+00:00 [Note] [Entrypoint]: Temporary server started. 23:16:33 mariadb | 2024-03-01 23:14:03+00:00 [Note] [Entrypoint]: Creating user policy_user 23:16:33 mariadb | 2024-03-01 23:14:03+00:00 [Note] [Entrypoint]: Securing system users (equivalent to running mysql_secure_installation) 23:16:33 mariadb | 23:16:33 mariadb | 2024-03-01 23:14:03+00:00 [Warn] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/db.conf 23:16:33 mariadb | 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.572670359Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=1.350254ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.579575329Z level=info msg="Executing migration" id="add index short_url.org_id-uid" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.580870703Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=1.291004ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.586964031Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.587028471Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=64.97µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.596234397Z level=info msg="Executing migration" id="delete alert_definition table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.596391778Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=131.381µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.601736444Z level=info msg="Executing migration" id="recreate alert_definition table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.602794636Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=1.057772ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.608197932Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.609482406Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.283864ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.615678164Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.616936357Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.263263ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.623826088Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql" 23:16:33 grafana | logger=migrator 
t=2024-03-01T23:13:59.623893708Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=68.06µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.632194992Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.634078018Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=1.884286ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.638770711Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.640437336Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.672765ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.646522413Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.648013628Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.490615ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.655259208Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.656196941Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=937.323µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.661582657Z level=info msg="Executing migration" id="Add column paused in alert_definition" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.669288189Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=7.705022ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.676733851Z level=info msg="Executing migration" id="drop alert_definition table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.677623373Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=883.322µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.683568971Z level=info msg="Executing migration" id="delete alert_definition_version table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.683696131Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=127.77µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.688706736Z level=info msg="Executing migration" id="recreate alert_definition_version table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.689984189Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=1.277133ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.698825215Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.699824668Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=999.123µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.706165986Z level=info msg="Executing migration" id="add index in alert_definition_version table on 
alert_definition_uid and version columns" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.707774371Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=1.607545ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.714786771Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.714885141Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=98.81µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.720158467Z level=info msg="Executing migration" id="drop alert_definition_version table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.721553771Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=1.394914ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.725495082Z level=info msg="Executing migration" id="create alert_instance table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.726827576Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=1.336064ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.733499716Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.734476008Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=975.822µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.740593376Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.742153701Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=1.559675ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.747584896Z level=info msg="Executing migration" id="add column current_state_end to alert_instance" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.753245233Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=5.659817ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.759816601Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.760708275Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=891.664µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.764159774Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.765560718Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=1.400784ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.772148188Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.809860297Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" 
duration=37.713559ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.816602777Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.854503266Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=37.900339ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.85930747Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.860063362Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=753.502µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.864600956Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.8662612Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=1.659434ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.871086264Z level=info msg="Executing migration" id="add current_reason column related to current_state" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.876694771Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=5.608587ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.880545483Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.886194349Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=5.648546ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.896139337Z level=info msg="Executing migration" id="create alert_rule table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.89717024Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=1.030513ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.905665264Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.90748128Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=1.816356ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.914377971Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.915948205Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=1.570074ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.923286636Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.924946411Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=1.660925ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.931292419Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.93135528Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" 
duration=63.381µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.937176797Z level=info msg="Executing migration" id="add column for to alert_rule" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.943423704Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=6.246697ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.951107627Z level=info msg="Executing migration" id="add column annotations to alert_rule" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.961021825Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=9.914108ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.966675281Z level=info msg="Executing migration" id="add column labels to alert_rule" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.970840294Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=4.162893ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.976639581Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns" 23:16:33 grafana | logger=migrator t=2024-03-01T23:13:59.977584583Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=944.852µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.040634047Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.042743285Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=2.111508ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.047939635Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.055574356Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=7.634541ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.061082187Z level=info msg="Executing migration" id="add panel_id column to alert_rule" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.070014782Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=8.936525ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.078905907Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.079682151Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=776.064µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.090555793Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.099580039Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=9.024766ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.103257444Z level=info msg="Executing migration" id="add is_paused column to alert_rule table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.109159387Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=5.904753ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.11250177Z level=info msg="Executing migration" id="fix 
is_paused column for alert_rule table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.112594481Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=93.461µs 23:16:33 policy-apex-pdp | connections.max.idle.ms = 540000 23:16:33 policy-apex-pdp | default.api.timeout.ms = 60000 23:16:33 policy-apex-pdp | enable.auto.commit = true 23:16:33 policy-apex-pdp | exclude.internal.topics = true 23:16:33 policy-apex-pdp | fetch.max.bytes = 52428800 23:16:33 policy-apex-pdp | fetch.max.wait.ms = 500 23:16:33 policy-apex-pdp | fetch.min.bytes = 1 23:16:33 policy-apex-pdp | group.id = d5634529-e7dd-41ae-91a6-87fa8cb77024 23:16:33 policy-apex-pdp | group.instance.id = null 23:16:33 policy-apex-pdp | heartbeat.interval.ms = 3000 23:16:33 policy-apex-pdp | interceptor.classes = [] 23:16:33 policy-apex-pdp | internal.leave.group.on.close = true 23:16:33 policy-apex-pdp | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:33 policy-apex-pdp | isolation.level = read_uncommitted 23:16:33 policy-apex-pdp | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:33 policy-apex-pdp | max.partition.fetch.bytes = 1048576 23:16:33 policy-apex-pdp | max.poll.interval.ms = 300000 23:16:33 policy-apex-pdp | max.poll.records = 500 23:16:33 policy-apex-pdp | metadata.max.age.ms = 300000 23:16:33 policy-apex-pdp | metric.reporters = [] 23:16:33 policy-apex-pdp | metrics.num.samples = 2 23:16:33 policy-apex-pdp | metrics.recording.level = INFO 23:16:33 policy-apex-pdp | metrics.sample.window.ms = 30000 23:16:33 policy-apex-pdp | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:33 policy-apex-pdp | receive.buffer.bytes = 65536 23:16:33 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:16:33 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:33 policy-apex-pdp | request.timeout.ms = 30000 23:16:33 policy-apex-pdp | retry.backoff.ms = 100 23:16:33 policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:33 policy-apex-pdp | sasl.jaas.config = null 23:16:33 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:33 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:33 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:33 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:33 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:33 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:33 policy-apex-pdp | sasl.login.class = null 23:16:33 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:33 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:33 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:33 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:33 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:33 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:33 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:33 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:33 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:33 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:33 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:33 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:33 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:33 policy-apex-pdp | 
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:33 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:33 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:33 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:33 kafka | [2024-03-01 23:14:06,210] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) 23:16:33 kafka | [2024-03-01 23:14:06,214] INFO Opening socket connection to server zookeeper/172.17.0.5:2181. (org.apache.zookeeper.ClientCnxn) 23:16:33 kafka | [2024-03-01 23:14:06,221] INFO Socket connection established, initiating session, client: /172.17.0.7:38814, server: zookeeper/172.17.0.5:2181 (org.apache.zookeeper.ClientCnxn) 23:16:33 kafka | [2024-03-01 23:14:06,229] INFO Session establishment complete on server zookeeper/172.17.0.5:2181, session id = 0x100000371220001, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) 23:16:33 kafka | [2024-03-01 23:14:06,234] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient) 23:16:33 kafka | [2024-03-01 23:14:06,537] INFO Cluster ID = M5MPGqipTkazoD2lpH5myg (kafka.server.KafkaServer) 23:16:33 kafka | [2024-03-01 23:14:06,542] WARN No meta.properties file under dir /var/lib/kafka/data/meta.properties (kafka.server.BrokerMetadataCheckpoint) 23:16:33 kafka | [2024-03-01 23:14:06,588] INFO KafkaConfig values: 23:16:33 kafka | advertised.listeners = PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092 23:16:33 kafka | alter.config.policy.class.name = null 23:16:33 kafka | alter.log.dirs.replication.quota.window.num = 11 23:16:33 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 23:16:33 kafka | authorizer.class.name = 23:16:33 kafka | auto.create.topics.enable = true 23:16:33 kafka | auto.include.jmx.reporter = true 23:16:33 kafka | auto.leader.rebalance.enable = true 23:16:33 kafka | background.threads = 10 23:16:33 kafka | broker.heartbeat.interval.ms = 2000 23:16:33 kafka | broker.id = 1 23:16:33 kafka | broker.id.generation.enable = true 23:16:33 kafka | broker.rack = null 23:16:33 kafka | broker.session.timeout.ms = 9000 23:16:33 kafka | client.quota.callback.class = null 23:16:33 kafka | compression.type = producer 23:16:33 kafka | connection.failed.authentication.delay.ms = 100 23:16:33 kafka | connections.max.idle.ms = 600000 23:16:33 kafka | connections.max.reauth.ms = 0 23:16:33 kafka | control.plane.listener.name = null 23:16:33 kafka | controlled.shutdown.enable = true 23:16:33 kafka | controlled.shutdown.max.retries = 3 23:16:33 kafka | controlled.shutdown.retry.backoff.ms = 5000 23:16:33 kafka | controller.listener.names = null 23:16:33 kafka | controller.quorum.append.linger.ms = 25 23:16:33 kafka | controller.quorum.election.backoff.max.ms = 1000 23:16:33 kafka | controller.quorum.election.timeout.ms = 1000 23:16:33 kafka | controller.quorum.fetch.timeout.ms = 2000 23:16:33 kafka | controller.quorum.request.timeout.ms = 2000 23:16:33 kafka | controller.quorum.retry.backoff.ms = 20 23:16:33 kafka | controller.quorum.voters = [] 23:16:33 kafka | controller.quota.window.num = 11 23:16:33 kafka | controller.quota.window.size.seconds = 1 23:16:33 kafka | controller.socket.timeout.ms = 30000 23:16:33 kafka | create.topic.policy.class.name = null 23:16:33 kafka | default.replication.factor = 1 23:16:33 kafka | delegation.token.expiry.check.interval.ms = 3600000 23:16:33 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:33 policy-apex-pdp | 
sasl.oauthbearer.token.endpoint.url = null 23:16:33 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:33 policy-apex-pdp | security.providers = null 23:16:33 policy-apex-pdp | send.buffer.bytes = 131072 23:16:33 policy-apex-pdp | session.timeout.ms = 45000 23:16:33 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:33 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:33 policy-apex-pdp | ssl.cipher.suites = null 23:16:33 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:33 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:33 policy-apex-pdp | ssl.engine.factory.class = null 23:16:33 policy-apex-pdp | ssl.key.password = null 23:16:33 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:16:33 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:16:33 policy-apex-pdp | ssl.keystore.key = null 23:16:33 policy-apex-pdp | ssl.keystore.location = null 23:16:33 policy-apex-pdp | ssl.keystore.password = null 23:16:33 policy-apex-pdp | ssl.keystore.type = JKS 23:16:33 policy-apex-pdp | ssl.protocol = TLSv1.3 23:16:33 policy-apex-pdp | ssl.provider = null 23:16:33 policy-apex-pdp | ssl.secure.random.implementation = null 23:16:33 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:16:33 policy-apex-pdp | ssl.truststore.certificates = null 23:16:33 policy-apex-pdp | ssl.truststore.location = null 23:16:33 policy-apex-pdp | ssl.truststore.password = null 23:16:33 policy-apex-pdp | ssl.truststore.type = JKS 23:16:33 policy-apex-pdp | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:33 policy-apex-pdp | 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.441+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.441+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.441+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709334877441 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.441+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-d5634529-e7dd-41ae-91a6-87fa8cb77024-2, groupId=d5634529-e7dd-41ae-91a6-87fa8cb77024] Subscribed to topic(s): policy-pdp-pap 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.442+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=3ddc54c6-966d-49c4-8dc0-dcc2b9920774, alive=false, publisher=null]]: starting 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.453+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:16:33 policy-apex-pdp | acks = -1 23:16:33 policy-apex-pdp | auto.include.jmx.reporter = true 23:16:33 policy-apex-pdp | batch.size = 16384 23:16:33 policy-apex-pdp | bootstrap.servers = [kafka:9092] 23:16:33 policy-apex-pdp | buffer.memory = 33554432 23:16:33 policy-apex-pdp | client.dns.lookup = use_all_dns_ips 23:16:33 policy-apex-pdp | client.id = producer-1 23:16:33 policy-apex-pdp | compression.type = none 23:16:33 policy-apex-pdp | connections.max.idle.ms = 540000 23:16:33 policy-apex-pdp | delivery.timeout.ms = 120000 23:16:33 policy-apex-pdp | enable.idempotence = true 23:16:33 policy-apex-pdp | interceptor.classes = [] 23:16:33 policy-apex-pdp | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:33 policy-apex-pdp | linger.ms = 0 23:16:33 policy-apex-pdp | max.block.ms = 60000 23:16:33 policy-apex-pdp | max.in.flight.requests.per.connection = 5 23:16:33 policy-apex-pdp | max.request.size = 1048576 
23:16:33 policy-apex-pdp | metadata.max.age.ms = 300000 23:16:33 policy-apex-pdp | metadata.max.idle.ms = 300000 23:16:33 policy-apex-pdp | metric.reporters = [] 23:16:33 policy-apex-pdp | metrics.num.samples = 2 23:16:33 policy-apex-pdp | metrics.recording.level = INFO 23:16:33 policy-apex-pdp | metrics.sample.window.ms = 30000 23:16:33 policy-apex-pdp | partitioner.adaptive.partitioning.enable = true 23:16:33 policy-apex-pdp | partitioner.availability.timeout.ms = 0 23:16:33 policy-apex-pdp | partitioner.class = null 23:16:33 policy-apex-pdp | partitioner.ignore.keys = false 23:16:33 policy-apex-pdp | receive.buffer.bytes = 32768 23:16:33 policy-apex-pdp | reconnect.backoff.max.ms = 1000 23:16:33 policy-apex-pdp | reconnect.backoff.ms = 50 23:16:33 policy-apex-pdp | request.timeout.ms = 30000 23:16:33 policy-apex-pdp | retries = 2147483647 23:16:33 policy-apex-pdp | retry.backoff.ms = 100 23:16:33 policy-apex-pdp | sasl.client.callback.handler.class = null 23:16:33 policy-apex-pdp | sasl.jaas.config = null 23:16:33 policy-apex-pdp | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:33 policy-apex-pdp | sasl.kerberos.min.time.before.relogin = 60000 23:16:33 policy-apex-pdp | sasl.kerberos.service.name = null 23:16:33 policy-apex-pdp | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:33 policy-apex-pdp | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:33 kafka | delegation.token.expiry.time.ms = 86400000 23:16:33 kafka | delegation.token.master.key = null 23:16:33 kafka | delegation.token.max.lifetime.ms = 604800000 23:16:33 kafka | delegation.token.secret.key = null 23:16:33 kafka | delete.records.purgatory.purge.interval.requests = 1 23:16:33 kafka | delete.topic.enable = true 23:16:33 kafka | early.start.listeners = null 23:16:33 kafka | fetch.max.bytes = 57671680 23:16:33 kafka | fetch.purgatory.purge.interval.requests = 1000 23:16:33 kafka | group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.RangeAssignor] 23:16:33 kafka | group.consumer.heartbeat.interval.ms = 5000 23:16:33 kafka | group.consumer.max.heartbeat.interval.ms = 15000 23:16:33 kafka | group.consumer.max.session.timeout.ms = 60000 23:16:33 kafka | group.consumer.max.size = 2147483647 23:16:33 kafka | group.consumer.min.heartbeat.interval.ms = 5000 23:16:33 kafka | group.consumer.min.session.timeout.ms = 45000 23:16:33 kafka | group.consumer.session.timeout.ms = 45000 23:16:33 kafka | group.coordinator.new.enable = false 23:16:33 kafka | group.coordinator.threads = 1 23:16:33 kafka | group.initial.rebalance.delay.ms = 3000 23:16:33 kafka | group.max.session.timeout.ms = 1800000 23:16:33 kafka | group.max.size = 2147483647 23:16:33 kafka | group.min.session.timeout.ms = 6000 23:16:33 kafka | initial.broker.registration.timeout.ms = 60000 23:16:33 kafka | inter.broker.listener.name = PLAINTEXT 23:16:33 kafka | inter.broker.protocol.version = 3.6-IV2 23:16:33 kafka | kafka.metrics.polling.interval.secs = 10 23:16:33 kafka | kafka.metrics.reporters = [] 23:16:33 kafka | leader.imbalance.check.interval.seconds = 300 23:16:33 kafka | leader.imbalance.per.broker.percentage = 10 23:16:33 kafka | listener.security.protocol.map = PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT 23:16:33 kafka | listeners = PLAINTEXT://0.0.0.0:9092,PLAINTEXT_HOST://0.0.0.0:29092 23:16:33 kafka | log.cleaner.backoff.ms = 15000 23:16:33 kafka | log.cleaner.dedupe.buffer.size = 134217728 23:16:33 kafka | log.cleaner.delete.retention.ms = 86400000 23:16:33 kafka | log.cleaner.enable = true 23:16:33 kafka | 
log.cleaner.io.buffer.load.factor = 0.9 23:16:33 kafka | log.cleaner.io.buffer.size = 524288 23:16:33 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 23:16:33 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 23:16:33 kafka | log.cleaner.min.cleanable.ratio = 0.5 23:16:33 kafka | log.cleaner.min.compaction.lag.ms = 0 23:16:33 kafka | log.cleaner.threads = 1 23:16:33 kafka | log.cleanup.policy = [delete] 23:16:33 kafka | log.dir = /tmp/kafka-logs 23:16:33 kafka | log.dirs = /var/lib/kafka/data 23:16:33 kafka | log.flush.interval.messages = 9223372036854775807 23:16:33 kafka | log.flush.interval.ms = null 23:16:33 kafka | log.flush.offset.checkpoint.interval.ms = 60000 23:16:33 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 23:16:33 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 23:16:33 kafka | log.index.interval.bytes = 4096 23:16:33 kafka | log.index.size.max.bytes = 10485760 23:16:33 kafka | log.local.retention.bytes = -2 23:16:33 kafka | log.local.retention.ms = -2 23:16:33 kafka | log.message.downconversion.enable = true 23:16:33 kafka | log.message.format.version = 3.0-IV1 23:16:33 kafka | log.message.timestamp.after.max.ms = 9223372036854775807 23:16:33 kafka | log.message.timestamp.before.max.ms = 9223372036854775807 23:16:33 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 23:16:33 kafka | log.message.timestamp.type = CreateTime 23:16:33 kafka | log.preallocate = false 23:16:33 kafka | log.retention.bytes = -1 23:16:33 kafka | log.retention.check.interval.ms = 300000 23:16:33 kafka | log.retention.hours = 168 23:16:33 kafka | log.retention.minutes = null 23:16:33 kafka | log.retention.ms = null 23:16:33 kafka | log.roll.hours = 168 23:16:33 kafka | log.roll.jitter.hours = 0 23:16:33 kafka | log.roll.jitter.ms = null 23:16:33 kafka | log.roll.ms = null 23:16:33 kafka | log.segment.bytes = 1073741824 23:16:33 kafka | log.segment.delete.delay.ms = 60000 23:16:33 kafka | max.connection.creation.rate = 2147483647 23:16:33 kafka | max.connections = 2147483647 23:16:33 kafka | max.connections.per.ip = 2147483647 23:16:33 kafka | max.connections.per.ip.overrides = 23:16:33 policy-pap | Waiting for mariadb port 3306... 23:16:33 policy-pap | mariadb (172.17.0.4:3306) open 23:16:33 policy-pap | Waiting for kafka port 9092... 23:16:33 policy-pap | kafka (172.17.0.7:9092) open 23:16:33 policy-pap | Waiting for api port 6969... 23:16:33 policy-pap | api (172.17.0.9:6969) open 23:16:33 policy-pap | Policy pap config file: /opt/app/policy/pap/etc/papParameters.yaml 23:16:33 policy-pap | PDP group configuration file: /opt/app/policy/pap/etc/mounted/groups.json 23:16:33 policy-pap | 23:16:33 policy-pap | . 
____ _ __ _ _ 23:16:33 policy-pap | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ 23:16:33 policy-pap | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ 23:16:33 policy-pap | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) 23:16:33 policy-pap | ' |____| .__|_| |_|_| |_\__, | / / / / 23:16:33 policy-pap | =========|_|==============|___/=/_/_/_/ 23:16:33 policy-pap | :: Spring Boot :: (v3.1.8) 23:16:33 policy-pap | 23:16:33 policy-pap | [2024-03-01T23:14:26.108+00:00|INFO|PolicyPapApplication|main] Starting PolicyPapApplication using Java 17.0.10 with PID 35 (/app/pap.jar started by policy in /opt/app/policy/pap/bin) 23:16:33 policy-pap | [2024-03-01T23:14:26.110+00:00|INFO|PolicyPapApplication|main] No active profile set, falling back to 1 default profile: "default" 23:16:33 policy-pap | [2024-03-01T23:14:27.919+00:00|INFO|RepositoryConfigurationDelegate|main] Bootstrapping Spring Data JPA repositories in DEFAULT mode. 23:16:33 policy-pap | [2024-03-01T23:14:28.032+00:00|INFO|RepositoryConfigurationDelegate|main] Finished Spring Data repository scanning in 100 ms. Found 7 JPA repository interfaces. 23:16:33 policy-pap | [2024-03-01T23:14:28.446+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 23:16:33 policy-pap | [2024-03-01T23:14:28.447+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.exception.ServiceExceptionHandler 23:16:33 policy-pap | [2024-03-01T23:14:29.081+00:00|INFO|TomcatWebServer|main] Tomcat initialized with port(s): 6969 (http) 23:16:33 policy-pap | [2024-03-01T23:14:29.091+00:00|INFO|Http11NioProtocol|main] Initializing ProtocolHandler ["http-nio-6969"] 23:16:33 policy-pap | [2024-03-01T23:14:29.092+00:00|INFO|StandardService|main] Starting service [Tomcat] 23:16:33 policy-pap | [2024-03-01T23:14:29.093+00:00|INFO|StandardEngine|main] Starting Servlet engine: [Apache Tomcat/10.1.18] 23:16:33 policy-pap | [2024-03-01T23:14:29.189+00:00|INFO|[/policy/pap/v1]|main] Initializing Spring embedded WebApplicationContext 23:16:33 policy-pap | [2024-03-01T23:14:29.190+00:00|INFO|ServletWebServerApplicationContext|main] Root WebApplicationContext: initialization completed in 3001 ms 23:16:33 policy-pap | [2024-03-01T23:14:29.606+00:00|INFO|LogHelper|main] HHH000204: Processing PersistenceUnitInfo [name: default] 23:16:33 policy-pap | [2024-03-01T23:14:29.691+00:00|INFO|Version|main] HHH000412: Hibernate ORM core version 6.3.0.CR1 23:16:33 policy-pap | [2024-03-01T23:14:29.695+00:00|INFO|Environment|main] HHH000406: Using bytecode reflection optimizer 23:16:33 policy-pap | [2024-03-01T23:14:29.744+00:00|INFO|RegionFactoryInitiator|main] HHH000026: Second-level cache disabled 23:16:33 policy-pap | [2024-03-01T23:14:30.119+00:00|INFO|SpringPersistenceUnitInfo|main] No LoadTimeWeaver setup: ignoring JPA class transformer 23:16:33 policy-pap | [2024-03-01T23:14:30.138+00:00|INFO|HikariDataSource|main] HikariPool-1 - Starting... 
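For context on the HikariPool-1 startup above: a minimal sketch (not the PAP source) of a HikariCP pool pointed at the mariadb container, assuming the policyadmin schema named by the db-migrator and purely illustrative credentials; the real PAP datasource settings come from papParameters.yaml and are not visible in this log.

// Minimal sketch, assuming jdbc:mariadb://mariadb:3306/policyadmin and hypothetical credentials.
// Roughly the kind of pool behind "HikariPool-1 - Starting..." / "Start completed." above.
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

import java.sql.Connection;
import java.sql.SQLException;

public class PapDataSourceSketch {
    public static void main(String[] args) throws SQLException {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:mariadb://mariadb:3306/policyadmin"); // assumed URL and schema name
        config.setUsername("policy_user");   // hypothetical credentials, not taken from this log
        config.setPassword("policy_user");
        config.setMaximumPoolSize(10);       // HikariCP default, shown explicitly for clarity

        try (HikariDataSource ds = new HikariDataSource(config);
             Connection conn = ds.getConnection()) {
            // Opening a connection is what produces the "Added connection org.mariadb.jdbc.Connection@..." line.
            System.out.println("Connected: " + conn.getMetaData().getURL());
        }
    }
}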
23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.115816035Z level=info msg="Executing migration" id="create alert_rule_version table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.116793928Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=977.674µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.122879582Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.12461906Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=1.738808ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.128558245Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.130388033Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=1.829128ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.138356825Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.138492395Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=136.58µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.14220231Z level=info msg="Executing migration" id="add column for to alert_rule_version" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.150239792Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=8.030982ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.153620666Z level=info msg="Executing migration" id="add column annotations to alert_rule_version" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.15968976Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=6.068714ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.162955513Z level=info msg="Executing migration" id="add column labels to alert_rule_version" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.169001397Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=6.045574ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.174837621Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.180914165Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=6.076164ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.184500659Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.190559884Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=6.057595ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.194199948Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.194291859Z level=info msg="Migration successfully executed" id="fix is_paused 
column for alert_rule_version table" duration=92.231µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.202796263Z level=info msg="Executing migration" id=create_alert_configuration_table 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.203919397Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=1.122994ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.207744772Z level=info msg="Executing migration" id="Add column default in alert_configuration" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.215226583Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=7.482581ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.218409855Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.218506686Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=93.861µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.224097358Z level=info msg="Executing migration" id="add column org_id in alert_configuration" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.230218693Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=6.120864ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.233719907Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.234815531Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=1.095074ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.23939755Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.245718595Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=6.320385ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.250757786Z level=info msg="Executing migration" id=create_ngalert_configuration_table 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.251489828Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=733.562µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.256602359Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.257629973Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=1.027004ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.260894746Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.271030407Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=10.134881ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.27427379Z level=info msg="Executing migration" id="create provenance_type table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.274797372Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=523.222µs 23:16:33 grafana | 
logger=migrator t=2024-03-01T23:14:00.280492284Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.282150212Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=1.657037ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.287560143Z level=info msg="Executing migration" id="create alert_image table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.288746037Z level=info msg="Migration successfully executed" id="create alert_image table" duration=1.190214ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.292931385Z level=info msg="Executing migration" id="add unique index on token to alert_image table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.294468911Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.537716ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.309295981Z level=info msg="Executing migration" id="support longer URLs in alert_image table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.309395561Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=100.86µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.321055288Z level=info msg="Executing migration" id=create_alert_configuration_history_table 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.323007175Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=1.954247ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.334956103Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.33675322Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=1.796187ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.349591952Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.350400565Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.371987302Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.372824765Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=836.933µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.395543326Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.397314623Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=1.771067ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.40897079Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.419366452Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=10.396132ms 23:16:33 grafana | logger=migrator 
t=2024-03-01T23:14:00.431960942Z level=info msg="Executing migration" id="create library_element table v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.433641599Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=1.680377ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.438572169Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.439758854Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=1.183395ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.446717032Z level=info msg="Executing migration" id="create library_element_connection table v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.448053457Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=1.328405ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.451881302Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.453099787Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=1.216165ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.458310928Z level=info msg="Executing migration" id="add unique index library_element org_id_uid" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.459497573Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=1.186435ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.46621721Z level=info msg="Executing migration" id="increase max description length to 2048" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.466352751Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=136.341µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.470764068Z level=info msg="Executing migration" id="alter library_element model to mediumtext" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.471046589Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=282.001µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.47619514Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.476781832Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=586.552µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.480728948Z level=info msg="Executing migration" id="create data_keys table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.482243004Z level=info msg="Migration successfully executed" id="create data_keys table" duration=1.513206ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.489455953Z level=info msg="Executing migration" id="create secrets table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.490830258Z level=info msg="Migration successfully executed" id="create secrets table" duration=1.359845ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.49624188Z level=info msg="Executing migration" id="rename data_keys name column to id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.54358725Z level=info msg="Migration 
successfully executed" id="rename data_keys name column to id" duration=47.34672ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.560233877Z level=info msg="Executing migration" id="add name column into data_keys" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.569877666Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=9.645299ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.577016734Z level=info msg="Executing migration" id="copy data_keys id column values into name" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.577300225Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=283.261µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.584635515Z level=info msg="Executing migration" id="rename data_keys name column to label" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.632242966Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=47.607541ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.636106671Z level=info msg="Executing migration" id="rename data_keys id column back to name" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.679818077Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=43.710926ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.685216729Z level=info msg="Executing migration" id="create kv_store table v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.685790221Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=573.292µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.690163039Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.691928445Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.764426ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.696341773Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.696722715Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=381.152µs 23:16:33 policy-pap | [2024-03-01T23:14:30.252+00:00|INFO|HikariPool|main] HikariPool-1 - Added connection org.mariadb.jdbc.Connection@124ac145 23:16:33 policy-pap | [2024-03-01T23:14:30.254+00:00|INFO|HikariDataSource|main] HikariPool-1 - Start completed. 23:16:33 policy-pap | [2024-03-01T23:14:32.124+00:00|INFO|JtaPlatformInitiator|main] HHH000489: No JTA platform available (set 'hibernate.transaction.jta.platform' to enable JTA platform integration) 23:16:33 policy-pap | [2024-03-01T23:14:32.128+00:00|INFO|LocalContainerEntityManagerFactoryBean|main] Initialized JPA EntityManagerFactory for persistence unit 'default' 23:16:33 policy-pap | [2024-03-01T23:14:32.644+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PdpGroupRepository 23:16:33 policy-pap | [2024-03-01T23:14:33.034+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. 
Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyStatusRepository 23:16:33 policy-pap | [2024-03-01T23:14:33.145+00:00|WARN|LocalVariableTableParameterNameDiscoverer|main] Using deprecated '-debug' fallback for parameter name resolution. Compile the affected code with '-parameters' instead or avoid its introspection: org.onap.policy.pap.main.repository.PolicyAuditRepository 23:16:33 policy-pap | [2024-03-01T23:14:33.435+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:33 policy-pap | allow.auto.create.topics = true 23:16:33 policy-pap | auto.commit.interval.ms = 5000 23:16:33 policy-pap | auto.include.jmx.reporter = true 23:16:33 policy-pap | auto.offset.reset = latest 23:16:33 policy-pap | bootstrap.servers = [kafka:9092] 23:16:33 policy-pap | check.crcs = true 23:16:33 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:33 policy-pap | client.id = consumer-b06317e2-ac80-4179-891e-43beb77f3709-1 23:16:33 policy-pap | client.rack = 23:16:33 policy-pap | connections.max.idle.ms = 540000 23:16:33 policy-pap | default.api.timeout.ms = 60000 23:16:33 policy-pap | enable.auto.commit = true 23:16:33 policy-pap | exclude.internal.topics = true 23:16:33 policy-pap | fetch.max.bytes = 52428800 23:16:33 policy-pap | fetch.max.wait.ms = 500 23:16:33 policy-pap | fetch.min.bytes = 1 23:16:33 policy-pap | group.id = b06317e2-ac80-4179-891e-43beb77f3709 23:16:33 policy-pap | group.instance.id = null 23:16:33 policy-pap | heartbeat.interval.ms = 3000 23:16:33 policy-pap | interceptor.classes = [] 23:16:33 policy-pap | internal.leave.group.on.close = true 23:16:33 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:33 policy-pap | isolation.level = read_uncommitted 23:16:33 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:33 policy-pap | max.partition.fetch.bytes = 1048576 23:16:33 policy-pap | max.poll.interval.ms = 300000 23:16:33 policy-pap | max.poll.records = 500 23:16:33 policy-pap | metadata.max.age.ms = 300000 23:16:33 policy-pap | metric.reporters = [] 23:16:33 policy-pap | metrics.num.samples = 2 23:16:33 policy-pap | metrics.recording.level = INFO 23:16:33 policy-pap | metrics.sample.window.ms = 30000 23:16:33 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:33 policy-pap | receive.buffer.bytes = 65536 23:16:33 policy-pap | reconnect.backoff.max.ms = 1000 23:16:33 policy-pap | reconnect.backoff.ms = 50 23:16:33 policy-pap | request.timeout.ms = 30000 23:16:33 policy-pap | retry.backoff.ms = 100 23:16:33 policy-pap | sasl.client.callback.handler.class = null 23:16:33 policy-pap | sasl.jaas.config = null 23:16:33 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:33 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:33 policy-pap | sasl.kerberos.service.name = null 23:16:33 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:33 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:33 policy-pap | sasl.login.callback.handler.class = null 23:16:33 policy-pap | sasl.login.class = null 23:16:33 policy-pap | sasl.login.connect.timeout.ms = null 23:16:33 policy-pap | sasl.login.read.timeout.ms = null 23:16:33 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:33 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:33 policy-pap | 
sasl.login.refresh.window.factor = 0.8 23:16:33 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:33 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:33 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:33 policy-db-migrator | Waiting for mariadb port 3306... 23:16:33 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:33 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:33 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:33 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:33 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:33 policy-db-migrator | nc: connect to mariadb (172.17.0.4) port 3306 (tcp) failed: Connection refused 23:16:33 policy-db-migrator | Connection to mariadb (172.17.0.4) 3306 port [tcp/mysql] succeeded! 23:16:33 policy-db-migrator | 321 blocks 23:16:33 policy-db-migrator | Preparing upgrade release version: 0800 23:16:33 policy-db-migrator | Preparing upgrade release version: 0900 23:16:33 policy-db-migrator | Preparing upgrade release version: 1000 23:16:33 policy-db-migrator | Preparing upgrade release version: 1100 23:16:33 policy-db-migrator | Preparing upgrade release version: 1200 23:16:33 policy-db-migrator | Preparing upgrade release version: 1300 23:16:33 policy-db-migrator | Done 23:16:33 policy-db-migrator | name version 23:16:33 policy-db-migrator | policyadmin 0 23:16:33 policy-db-migrator | policyadmin: upgrade available: 0 -> 1300 23:16:33 policy-db-migrator | upgrade: 0 -> 1300 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0100-jpapdpgroup_properties.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpgroup_properties (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpstatistics_enginestats (AVERAGEEXECUTIONTIME DOUBLE DEFAULT NULL, ENGINEID VARCHAR(255) DEFAULT NULL, ENGINETIMESTAMP BIGINT DEFAULT NULL, ENGINEWORKERSTATE INT DEFAULT NULL, EVENTCOUNT BIGINT DEFAULT NULL, LASTENTERTIME BIGINT DEFAULT NULL, LASTEXECUTIONTIME BIGINT DEFAULT NULL, LASTSTART BIGINT DEFAULT NULL, UPTIME BIGINT DEFAULT NULL, timeStamp datetime DEFAULT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0120-jpapdpsubgroup_policies.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-pap | sasl.mechanism = GSSAPI 23:16:33 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:33 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:33 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:33 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:33 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:33 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:33 policy-pap | 
sasl.oauthbearer.jwks.endpoint.url = null 23:16:33 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:33 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:33 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:33 policy-pap | security.protocol = PLAINTEXT 23:16:33 policy-pap | security.providers = null 23:16:33 policy-pap | send.buffer.bytes = 131072 23:16:33 policy-pap | session.timeout.ms = 45000 23:16:33 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:33 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:33 policy-pap | ssl.cipher.suites = null 23:16:33 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:33 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:33 policy-pap | ssl.engine.factory.class = null 23:16:33 policy-pap | ssl.key.password = null 23:16:33 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:33 policy-pap | ssl.keystore.certificate.chain = null 23:16:33 policy-pap | ssl.keystore.key = null 23:16:33 policy-pap | ssl.keystore.location = null 23:16:33 policy-pap | ssl.keystore.password = null 23:16:33 policy-pap | ssl.keystore.type = JKS 23:16:33 policy-pap | ssl.protocol = TLSv1.3 23:16:33 policy-pap | ssl.provider = null 23:16:33 policy-pap | ssl.secure.random.implementation = null 23:16:33 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:33 policy-pap | ssl.truststore.certificates = null 23:16:33 policy-pap | ssl.truststore.location = null 23:16:33 policy-pap | ssl.truststore.password = null 23:16:33 policy-pap | ssl.truststore.type = JKS 23:16:33 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:33 policy-pap | 23:16:33 policy-pap | [2024-03-01T23:14:33.600+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:33 policy-pap | [2024-03-01T23:14:33.600+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:33 policy-pap | [2024-03-01T23:14:33.600+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709334873598 23:16:33 policy-pap | [2024-03-01T23:14:33.602+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-b06317e2-ac80-4179-891e-43beb77f3709-1, groupId=b06317e2-ac80-4179-891e-43beb77f3709] Subscribed to topic(s): policy-pdp-pap 23:16:33 policy-pap | [2024-03-01T23:14:33.603+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:33 policy-pap | allow.auto.create.topics = true 23:16:33 policy-pap | auto.commit.interval.ms = 5000 23:16:33 policy-pap | auto.include.jmx.reporter = true 23:16:33 policy-pap | auto.offset.reset = latest 23:16:33 policy-pap | bootstrap.servers = [kafka:9092] 23:16:33 policy-pap | check.crcs = true 23:16:33 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:33 policy-pap | client.id = consumer-policy-pap-2 23:16:33 policy-pap | client.rack = 23:16:33 policy-pap | connections.max.idle.ms = 540000 23:16:33 policy-pap | default.api.timeout.ms = 60000 23:16:33 policy-pap | enable.auto.commit = true 23:16:33 policy-pap | exclude.internal.topics = true 23:16:33 policy-pap | fetch.max.bytes = 52428800 23:16:33 policy-pap | fetch.max.wait.ms = 500 23:16:33 policy-pap | fetch.min.bytes = 1 23:16:33 policy-pap | group.id = policy-pap 23:16:33 policy-pap | group.instance.id = null 23:16:33 policy-pap | heartbeat.interval.ms = 3000 23:16:33 policy-pap | interceptor.classes = [] 23:16:33 policy-pap | internal.leave.group.on.close = true 23:16:33 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:33 policy-pap | isolation.level = read_uncommitted 23:16:33 
policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:33 policy-pap | max.partition.fetch.bytes = 1048576 23:16:33 policy-pap | max.poll.interval.ms = 300000 23:16:33 policy-pap | max.poll.records = 500 23:16:33 policy-pap | metadata.max.age.ms = 300000 23:16:33 policy-pap | metric.reporters = [] 23:16:33 policy-pap | metrics.num.samples = 2 23:16:33 policy-pap | metrics.recording.level = INFO 23:16:33 policy-pap | metrics.sample.window.ms = 30000 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_policies (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0130-jpapdpsubgroup_properties.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_properties (parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL, PROPERTIES VARCHAR(255) DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) DEFAULT NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0140-jpapdpsubgroup_supportedpolicytypes.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapdpsubgroup_supportedpolicytypes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, parentLocalName VARCHAR(120) DEFAULT NULL, localName VARCHAR(120) DEFAULT NULL, parentKeyVersion VARCHAR(15) DEFAULT NULL, parentKeyName VARCHAR(120) DEFAULT NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0150-jpatoscacapabilityassignment_attributes.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_attributes (name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, ATTRIBUTES LONGTEXT DEFAULT NULL, ATTRIBUTES_KEY VARCHAR(255) DEFAULT NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0160-jpatoscacapabilityassignment_metadata.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0170-jpatoscacapabilityassignment_occurrences.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilityassignment_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0180-jpatoscacapabilityassignment_properties.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT 
EXISTS jpatoscacapabilityassignment_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0190-jpatoscacapabilitytype_metadata.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:33 policy-pap | receive.buffer.bytes = 65536 23:16:33 policy-pap | reconnect.backoff.max.ms = 1000 23:16:33 policy-pap | reconnect.backoff.ms = 50 23:16:33 policy-pap | request.timeout.ms = 30000 23:16:33 policy-pap | retry.backoff.ms = 100 23:16:33 policy-pap | sasl.client.callback.handler.class = null 23:16:33 policy-pap | sasl.jaas.config = null 23:16:33 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:33 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:33 policy-pap | sasl.kerberos.service.name = null 23:16:33 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:33 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:33 policy-pap | sasl.login.callback.handler.class = null 23:16:33 policy-pap | sasl.login.class = null 23:16:33 policy-pap | sasl.login.connect.timeout.ms = null 23:16:33 policy-pap | sasl.login.read.timeout.ms = null 23:16:33 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:33 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:33 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:33 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:33 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:33 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:33 policy-pap | sasl.mechanism = GSSAPI 23:16:33 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:33 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:33 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:33 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:33 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:33 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:33 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:33 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:33 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:33 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:33 policy-pap | security.protocol = PLAINTEXT 23:16:33 policy-pap | security.providers = null 23:16:33 policy-pap | send.buffer.bytes = 131072 23:16:33 policy-pap | session.timeout.ms = 45000 23:16:33 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:33 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:33 policy-pap | ssl.cipher.suites = null 23:16:33 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:33 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:33 policy-pap | ssl.engine.factory.class = null 23:16:33 policy-pap | ssl.key.password = null 23:16:33 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:33 policy-pap | 
ssl.keystore.certificate.chain = null 23:16:33 policy-pap | ssl.keystore.key = null 23:16:33 policy-pap | ssl.keystore.location = null 23:16:33 policy-pap | ssl.keystore.password = null 23:16:33 policy-pap | ssl.keystore.type = JKS 23:16:33 policy-pap | ssl.protocol = TLSv1.3 23:16:33 policy-pap | ssl.provider = null 23:16:33 policy-pap | ssl.secure.random.implementation = null 23:16:33 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:33 policy-pap | ssl.truststore.certificates = null 23:16:33 policy-pap | ssl.truststore.location = null 23:16:33 policy-pap | ssl.truststore.password = null 23:16:33 policy-pap | ssl.truststore.type = JKS 23:16:33 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:33 policy-pap | 23:16:33 policy-pap | [2024-03-01T23:14:33.608+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:33 policy-pap | [2024-03-01T23:14:33.609+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:33 policy-pap | [2024-03-01T23:14:33.609+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709334873608 23:16:33 policy-pap | [2024-03-01T23:14:33.609+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-2, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 23:16:33 policy-apex-pdp | sasl.login.callback.handler.class = null 23:16:33 policy-apex-pdp | sasl.login.class = null 23:16:33 policy-apex-pdp | sasl.login.connect.timeout.ms = null 23:16:33 policy-apex-pdp | sasl.login.read.timeout.ms = null 23:16:33 policy-apex-pdp | sasl.login.refresh.buffer.seconds = 300 23:16:33 policy-apex-pdp | sasl.login.refresh.min.period.seconds = 60 23:16:33 policy-apex-pdp | sasl.login.refresh.window.factor = 0.8 23:16:33 policy-apex-pdp | sasl.login.refresh.window.jitter = 0.05 23:16:33 policy-apex-pdp | sasl.login.retry.backoff.max.ms = 10000 23:16:33 policy-apex-pdp | sasl.login.retry.backoff.ms = 100 23:16:33 policy-apex-pdp | sasl.mechanism = GSSAPI 23:16:33 policy-apex-pdp | sasl.oauthbearer.clock.skew.seconds = 30 23:16:33 policy-apex-pdp | sasl.oauthbearer.expected.audience = null 23:16:33 policy-apex-pdp | sasl.oauthbearer.expected.issuer = null 23:16:33 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:33 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:33 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:33 policy-apex-pdp | sasl.oauthbearer.jwks.endpoint.url = null 23:16:33 policy-apex-pdp | sasl.oauthbearer.scope.claim.name = scope 23:16:33 policy-apex-pdp | sasl.oauthbearer.sub.claim.name = sub 23:16:33 policy-apex-pdp | sasl.oauthbearer.token.endpoint.url = null 23:16:33 policy-pap | [2024-03-01T23:14:33.956+00:00|INFO|PapDatabaseInitializer|main] Created initial pdpGroup in DB - PdpGroups(groups=[PdpGroup(name=defaultGroup, description=The default group that registers all supported policy types and pdps., pdpGroupState=ACTIVE, properties=null, pdpSubgroups=[PdpSubGroup(pdpType=apex, supportedPolicyTypes=[onap.policies.controlloop.operational.common.Apex 1.0.0, onap.policies.native.Apex 1.0.0], policies=[], currentInstanceCount=0, desiredInstanceCount=1, properties=null, pdpInstances=null)])]) from /opt/app/policy/pap/etc/mounted/groups.json 23:16:33 policy-pap | [2024-03-01T23:14:34.124+00:00|WARN|JpaBaseConfiguration$JpaWebConfiguration|main] spring.jpa.open-in-view is enabled by default. Therefore, database queries may be performed during view rendering. 
Explicitly configure spring.jpa.open-in-view to disable this warning 23:16:33 policy-pap | [2024-03-01T23:14:34.371+00:00|INFO|DefaultSecurityFilterChain|main] Will secure any request with [org.springframework.security.web.session.DisableEncodeUrlFilter@50f13494, org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter@73c09a98, org.springframework.security.web.context.SecurityContextHolderFilter@17e6d07b, org.springframework.security.web.header.HeaderWriterFilter@16361e61, org.springframework.security.web.authentication.logout.LogoutFilter@dcdb883, org.springframework.security.web.authentication.www.BasicAuthenticationFilter@55cb3b7, org.springframework.security.web.savedrequest.RequestCacheAwareFilter@53564a4c, org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter@69a294d8, org.springframework.security.web.authentication.AnonymousAuthenticationFilter@6f2bf657, org.springframework.security.web.access.ExceptionTranslationFilter@399fd710, org.springframework.security.web.access.intercept.AuthorizationFilter@68de8522] 23:16:33 policy-pap | [2024-03-01T23:14:35.271+00:00|INFO|EndpointLinksResolver|main] Exposing 3 endpoint(s) beneath base path '' 23:16:33 policy-pap | [2024-03-01T23:14:35.409+00:00|INFO|Http11NioProtocol|main] Starting ProtocolHandler ["http-nio-6969"] 23:16:33 policy-pap | [2024-03-01T23:14:35.442+00:00|INFO|TomcatWebServer|main] Tomcat started on port(s): 6969 (http) with context path '/policy/pap/v1' 23:16:33 policy-pap | [2024-03-01T23:14:35.460+00:00|INFO|ServiceManager|main] Policy PAP starting 23:16:33 policy-pap | [2024-03-01T23:14:35.461+00:00|INFO|ServiceManager|main] Policy PAP starting Meter Registry 23:16:33 policy-pap | [2024-03-01T23:14:35.461+00:00|INFO|ServiceManager|main] Policy PAP starting PAP parameters 23:16:33 policy-pap | [2024-03-01T23:14:35.463+00:00|INFO|ServiceManager|main] Policy PAP starting Pdp Heartbeat Listener 23:16:33 policy-pap | [2024-03-01T23:14:35.463+00:00|INFO|ServiceManager|main] Policy PAP starting Response Request ID Dispatcher 23:16:33 policy-pap | [2024-03-01T23:14:35.463+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Request ID Dispatcher 23:16:33 policy-pap | [2024-03-01T23:14:35.463+00:00|INFO|ServiceManager|main] Policy PAP starting Response Message Dispatcher 23:16:33 policy-pap | [2024-03-01T23:14:35.467+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=b06317e2-ac80-4179-891e-43beb77f3709, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@47f5ab58 23:16:33 policy-pap | [2024-03-01T23:14:35.479+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=b06317e2-ac80-4179-891e-43beb77f3709, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, 
apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:33 policy-pap | [2024-03-01T23:14:35.480+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:33 policy-pap | allow.auto.create.topics = true 23:16:33 policy-pap | auto.commit.interval.ms = 5000 23:16:33 policy-pap | auto.include.jmx.reporter = true 23:16:33 policy-pap | auto.offset.reset = latest 23:16:33 policy-pap | bootstrap.servers = [kafka:9092] 23:16:33 policy-pap | check.crcs = true 23:16:33 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:33 policy-pap | client.id = consumer-b06317e2-ac80-4179-891e-43beb77f3709-3 23:16:33 policy-pap | client.rack = 23:16:33 policy-pap | connections.max.idle.ms = 540000 23:16:33 policy-pap | default.api.timeout.ms = 60000 23:16:33 policy-pap | enable.auto.commit = true 23:16:33 policy-pap | exclude.internal.topics = true 23:16:33 policy-pap | fetch.max.bytes = 52428800 23:16:33 policy-pap | fetch.max.wait.ms = 500 23:16:33 policy-pap | fetch.min.bytes = 1 23:16:33 policy-pap | group.id = b06317e2-ac80-4179-891e-43beb77f3709 23:16:33 policy-pap | group.instance.id = null 23:16:33 policy-pap | heartbeat.interval.ms = 3000 23:16:33 policy-pap | interceptor.classes = [] 23:16:33 policy-pap | internal.leave.group.on.close = true 23:16:33 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:33 policy-pap | isolation.level = read_uncommitted 23:16:33 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:33 policy-pap | max.partition.fetch.bytes = 1048576 23:16:33 policy-pap | max.poll.interval.ms = 300000 23:16:33 policy-pap | max.poll.records = 500 23:16:33 policy-pap | metadata.max.age.ms = 300000 23:16:33 policy-pap | metric.reporters = [] 23:16:33 policy-pap | metrics.num.samples = 2 23:16:33 policy-pap | metrics.recording.level = INFO 23:16:33 policy-pap | metrics.sample.window.ms = 30000 23:16:33 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:33 policy-pap | receive.buffer.bytes = 65536 23:16:33 policy-pap | reconnect.backoff.max.ms = 1000 23:16:33 policy-pap | reconnect.backoff.ms = 50 23:16:33 policy-pap | request.timeout.ms = 30000 23:16:33 policy-pap | retry.backoff.ms = 100 23:16:33 policy-pap | sasl.client.callback.handler.class = null 23:16:33 policy-pap | sasl.jaas.config = null 23:16:33 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:33 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:33 policy-pap | sasl.kerberos.service.name = null 23:16:33 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:33 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:33 policy-pap | sasl.login.callback.handler.class = null 23:16:33 policy-pap | sasl.login.class = null 23:16:33 policy-pap | sasl.login.connect.timeout.ms = null 23:16:33 policy-pap | sasl.login.read.timeout.ms = null 23:16:33 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:33 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:33 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:33 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:33 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:33 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:33 policy-pap | sasl.mechanism = 
GSSAPI 23:16:33 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:33 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:33 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:33 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:33 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:33 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:33 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:33 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:33 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:33 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:33 policy-pap | security.protocol = PLAINTEXT 23:16:33 policy-pap | security.providers = null 23:16:33 policy-pap | send.buffer.bytes = 131072 23:16:33 policy-pap | session.timeout.ms = 45000 23:16:33 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:33 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:33 policy-pap | ssl.cipher.suites = null 23:16:33 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:33 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:33 policy-pap | ssl.engine.factory.class = null 23:16:33 policy-pap | ssl.key.password = null 23:16:33 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:33 policy-pap | ssl.keystore.certificate.chain = null 23:16:33 policy-pap | ssl.keystore.key = null 23:16:33 policy-pap | ssl.keystore.location = null 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.701752575Z level=info msg="Executing migration" id="create permission table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.702554218Z level=info msg="Migration successfully executed" id="create permission table" duration=801.273µs 23:16:33 mariadb | 2024-03-01 23:14:03+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/db.sh 23:16:33 mariadb | #!/bin/bash -xv 23:16:33 mariadb | # Copyright 2019,2021 AT&T Intellectual Property. All rights reserved 23:16:33 mariadb | # Modifications Copyright (c) 2022 Nordix Foundation. 23:16:33 mariadb | # 23:16:33 mariadb | # Licensed under the Apache License, Version 2.0 (the "License"); 23:16:33 mariadb | # you may not use this file except in compliance with the License. 23:16:33 mariadb | # You may obtain a copy of the License at 23:16:33 mariadb | # 23:16:33 mariadb | # http://www.apache.org/licenses/LICENSE-2.0 23:16:33 mariadb | # 23:16:33 mariadb | # Unless required by applicable law or agreed to in writing, software 23:16:33 mariadb | # distributed under the License is distributed on an "AS IS" BASIS, 23:16:33 mariadb | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 23:16:33 mariadb | # See the License for the specific language governing permissions and 23:16:33 mariadb | # limitations under the License. 
23:16:33 mariadb | 23:16:33 mariadb | for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:33 mariadb | do 23:16:33 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "CREATE DATABASE IF NOT EXISTS ${db};" 23:16:33 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "GRANT ALL PRIVILEGES ON \`${db}\`.* TO '${MYSQL_USER}'@'%' ;" 23:16:33 mariadb | done 23:16:33 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:33 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS migration;' 23:16:33 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `migration`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:33 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:33 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS pooling;' 23:16:33 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `pooling`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:33 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:33 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyadmin;' 23:16:33 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyadmin`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:33 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:33 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS operationshistory;' 23:16:33 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `operationshistory`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:33 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:33 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS clampacm;' 23:16:33 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `clampacm`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:33 mariadb | + for db in migration pooling policyadmin operationshistory clampacm policyclamp 23:16:33 mariadb | + mysql -uroot -psecret --execute 'CREATE DATABASE IF NOT EXISTS policyclamp;' 23:16:33 mariadb | + mysql -uroot -psecret --execute 'GRANT ALL PRIVILEGES ON `policyclamp`.* TO '\''policy_user'\''@'\''%'\'' ;' 23:16:33 mariadb | 23:16:33 policy-pap | ssl.keystore.password = null 23:16:33 policy-pap | ssl.keystore.type = JKS 23:16:33 policy-pap | ssl.protocol = TLSv1.3 23:16:33 policy-pap | ssl.provider = null 23:16:33 policy-pap | ssl.secure.random.implementation = null 23:16:33 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:33 policy-pap | ssl.truststore.certificates = null 23:16:33 policy-pap | ssl.truststore.location = null 23:16:33 policy-pap | ssl.truststore.password = null 23:16:33 policy-pap | ssl.truststore.type = JKS 23:16:33 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:33 policy-pap | 23:16:33 policy-pap | [2024-03-01T23:14:35.486+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:33 policy-pap | [2024-03-01T23:14:35.486+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:33 policy-pap | [2024-03-01T23:14:35.486+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709334875486 23:16:33 policy-pap | [2024-03-01T23:14:35.486+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-b06317e2-ac80-4179-891e-43beb77f3709-3, groupId=b06317e2-ac80-4179-891e-43beb77f3709] Subscribed to 
topic(s): policy-pdp-pap 23:16:33 policy-pap | [2024-03-01T23:14:35.487+00:00|INFO|ServiceManager|main] Policy PAP starting Heartbeat Message Dispatcher 23:16:33 policy-pap | [2024-03-01T23:14:35.487+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=68dd5619-1612-461c-9253-6bb52b88e744, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@3435a4e5 23:16:33 policy-pap | [2024-03-01T23:14:35.487+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=68dd5619-1612-461c-9253-6bb52b88e744, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=false, locked=false, uebThread=null, topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:33 policy-pap | [2024-03-01T23:14:35.487+00:00|INFO|ConsumerConfig|main] ConsumerConfig values: 23:16:33 policy-pap | allow.auto.create.topics = true 23:16:33 policy-pap | auto.commit.interval.ms = 5000 23:16:33 policy-pap | auto.include.jmx.reporter = true 23:16:33 policy-pap | auto.offset.reset = latest 23:16:33 policy-pap | bootstrap.servers = [kafka:9092] 23:16:33 policy-pap | check.crcs = true 23:16:33 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:33 policy-pap | client.id = consumer-policy-pap-4 23:16:33 policy-pap | client.rack = 23:16:33 policy-pap | connections.max.idle.ms = 540000 23:16:33 policy-pap | default.api.timeout.ms = 60000 23:16:33 policy-pap | enable.auto.commit = true 23:16:33 policy-pap | exclude.internal.topics = true 23:16:33 policy-pap | fetch.max.bytes = 52428800 23:16:33 policy-pap | fetch.max.wait.ms = 500 23:16:33 policy-pap | fetch.min.bytes = 1 23:16:33 policy-pap | group.id = policy-pap 23:16:33 policy-pap | group.instance.id = null 23:16:33 policy-pap | heartbeat.interval.ms = 3000 23:16:33 policy-pap | interceptor.classes = [] 23:16:33 policy-pap | internal.leave.group.on.close = true 23:16:33 policy-pap | internal.throw.on.fetch.stable.offset.unsupported = false 23:16:33 policy-pap | isolation.level = read_uncommitted 23:16:33 policy-pap | key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:33 policy-pap | max.partition.fetch.bytes = 1048576 23:16:33 policy-pap | max.poll.interval.ms = 300000 23:16:33 policy-pap | max.poll.records = 500 23:16:33 policy-pap | metadata.max.age.ms = 300000 23:16:33 policy-pap | metric.reporters = [] 23:16:33 policy-pap | metrics.num.samples = 2 23:16:33 policy-pap | metrics.recording.level = INFO 23:16:33 policy-pap | metrics.sample.window.ms = 30000 23:16:33 policy-pap | partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class 
org.apache.kafka.clients.consumer.CooperativeStickyAssignor] 23:16:33 policy-pap | receive.buffer.bytes = 65536 23:16:33 policy-pap | reconnect.backoff.max.ms = 1000 23:16:33 policy-pap | reconnect.backoff.ms = 50 23:16:33 policy-pap | request.timeout.ms = 30000 23:16:33 policy-pap | retry.backoff.ms = 100 23:16:33 policy-pap | sasl.client.callback.handler.class = null 23:16:33 policy-pap | sasl.jaas.config = null 23:16:33 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:33 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:33 policy-pap | sasl.kerberos.service.name = null 23:16:33 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:33 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:33 policy-pap | sasl.login.callback.handler.class = null 23:16:33 policy-pap | sasl.login.class = null 23:16:33 policy-pap | sasl.login.connect.timeout.ms = null 23:16:33 policy-pap | sasl.login.read.timeout.ms = null 23:16:33 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:33 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:33 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:33 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:33 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:33 mariadb | mysql -uroot -p"${MYSQL_ROOT_PASSWORD}" --execute "FLUSH PRIVILEGES;" 23:16:33 mariadb | + mysql -uroot -psecret --execute 'FLUSH PRIVILEGES;' 23:16:33 mariadb | mysql -u"${MYSQL_USER}" -p"${MYSQL_PASSWORD}" -f policyclamp < /tmp/policy-clamp-create-tables.sql 23:16:33 mariadb | + mysql -upolicy_user -ppolicy_user -f policyclamp 23:16:33 mariadb | 23:16:33 mariadb | 2024-03-01 23:14:04+00:00 [Note] [Entrypoint]: Stopping temporary server 23:16:33 mariadb | 2024-03-01 23:14:04 0 [Note] mariadbd (initiated by: unknown): Normal shutdown 23:16:33 mariadb | 2024-03-01 23:14:04 0 [Note] InnoDB: FTS optimize thread exiting. 23:16:33 mariadb | 2024-03-01 23:14:04 0 [Note] InnoDB: Starting shutdown... 23:16:33 mariadb | 2024-03-01 23:14:04 0 [Note] InnoDB: Dumping buffer pool(s) to /var/lib/mysql/ib_buffer_pool 23:16:33 mariadb | 2024-03-01 23:14:04 0 [Note] InnoDB: Buffer pool(s) dump completed at 240301 23:14:04 23:16:33 mariadb | 2024-03-01 23:14:04 0 [Note] InnoDB: Removed temporary tablespace data file: "./ibtmp1" 23:16:33 mariadb | 2024-03-01 23:14:04 0 [Note] InnoDB: Shutdown completed; log sequence number 332713; transaction id 298 23:16:33 mariadb | 2024-03-01 23:14:04 0 [Note] mariadbd: Shutdown complete 23:16:33 mariadb | 23:16:33 mariadb | 2024-03-01 23:14:04+00:00 [Note] [Entrypoint]: Temporary server stopped 23:16:33 mariadb | 23:16:33 mariadb | 2024-03-01 23:14:04+00:00 [Note] [Entrypoint]: MariaDB init process done. Ready for start up. 23:16:33 mariadb | 23:16:33 mariadb | 2024-03-01 23:14:04 0 [Note] mariadbd (server 10.10.2-MariaDB-1:10.10.2+maria~ubu2204) starting as process 1 ... 
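The ConsumerConfig dumps above are plain Apache Kafka java-client settings: bootstrap.servers=[kafka:9092], StringDeserializer for both key and value, auto.offset.reset=latest, and group.id=policy-pap (plus the UUID-named group used for the heartbeat source). As a rough standalone illustration only (not the ONAP SingleThreadedKafkaTopicSource wrapper itself; the class name below is invented), an equivalent consumer subscribed to policy-pdp-pap could look like this:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class PdpPapListenerSketch {
        public static void main(String[] args) {
            // Settings mirrored from the ConsumerConfig dump in the log above
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"));
                // One poll is enough for a sketch; the real component polls in a loop
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                records.forEach(r -> System.out.printf("offset=%d value=%s%n", r.offset(), r.value()));
            }
        }
    }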
23:16:33 mariadb | 2024-03-01 23:14:04 0 [Note] InnoDB: Compressed tables use zlib 1.2.11 23:16:33 mariadb | 2024-03-01 23:14:04 0 [Note] InnoDB: Number of transaction pools: 1 23:16:33 mariadb | 2024-03-01 23:14:04 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions 23:16:33 mariadb | 2024-03-01 23:14:04 0 [Note] mariadbd: O_TMPFILE is not supported on /tmp (disabling future attempts) 23:16:33 mariadb | 2024-03-01 23:14:04 0 [Warning] mariadbd: io_uring_queue_init() failed with ENOSYS: check seccomp filters, and the kernel version (newer than 5.1 required) 23:16:33 mariadb | 2024-03-01 23:14:04 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF 23:16:33 mariadb | 2024-03-01 23:14:04 0 [Note] InnoDB: Initializing buffer pool, total size = 128.000MiB, chunk size = 2.000MiB 23:16:33 mariadb | 2024-03-01 23:14:04 0 [Note] InnoDB: Completed initialization of buffer pool 23:16:33 mariadb | 2024-03-01 23:14:04 0 [Note] InnoDB: File system buffers for log disabled (block size=512 bytes) 23:16:33 mariadb | 2024-03-01 23:14:04 0 [Note] InnoDB: 128 rollback segments are active. 23:16:33 mariadb | 2024-03-01 23:14:04 0 [Note] InnoDB: Setting file './ibtmp1' size to 12.000MiB. Physically writing the file full; Please wait ... 23:16:33 mariadb | 2024-03-01 23:14:04 0 [Note] InnoDB: File './ibtmp1' size is now 12.000MiB. 23:16:33 mariadb | 2024-03-01 23:14:04 0 [Note] InnoDB: log sequence number 332713; transaction id 299 23:16:33 mariadb | 2024-03-01 23:14:04 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool 23:16:33 mariadb | 2024-03-01 23:14:04 0 [Note] Plugin 'FEEDBACK' is disabled. 23:16:33 mariadb | 2024-03-01 23:14:04 0 [Warning] 'default-authentication-plugin' is MySQL 5.6 / 5.7 compatible option. To be implemented in later versions. 23:16:33 mariadb | 2024-03-01 23:14:04 0 [Warning] You need to use --log-bin to make --expire-logs-days or --binlog-expire-logs-seconds work. 23:16:33 mariadb | 2024-03-01 23:14:04 0 [Note] Server socket created on IP: '0.0.0.0'. 23:16:33 mariadb | 2024-03-01 23:14:04 0 [Note] Server socket created on IP: '::'. 23:16:33 mariadb | 2024-03-01 23:14:04 0 [Note] mariadbd: ready for connections. 
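Earlier in this block the db.sh trace shows the MariaDB entrypoint creating the migration, pooling, policyadmin, operationshistory, clampacm and policyclamp schemas and granting them to policy_user before the permanent server reports "ready for connections" on port 3306. A minimal JDBC sketch of a client reaching one of those schemas follows; the host name "mariadb" and the use of MariaDB Connector/J are assumptions made for illustration, not values taken from this log:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class PolicyDbProbe {
        public static void main(String[] args) throws Exception {
            // Assumed JDBC URL; the log only shows the server listening on port 3306
            String url = "jdbc:mariadb://mariadb:3306/policyadmin";
            // policy_user / policy_user as seen in the db.sh trace (-upolicy_user -ppolicy_user)
            try (Connection conn = DriverManager.getConnection(url, "policy_user", "policy_user");
                 Statement st = conn.createStatement();
                 ResultSet rs = st.executeQuery("SHOW TABLES")) {
                while (rs.next()) {
                    // Lists the tables that policy-db-migrator creates in this schema
                    System.out.println(rs.getString(1));
                }
            }
        }
    }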
23:16:33 mariadb | Version: '10.10.2-MariaDB-1:10.10.2+maria~ubu2204' socket: '/run/mysqld/mysqld.sock' port: 3306 mariadb.org binary distribution 23:16:33 mariadb | 2024-03-01 23:14:04 0 [Note] InnoDB: Buffer pool(s) load completed at 240301 23:14:04 23:16:33 mariadb | 2024-03-01 23:14:04 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.10' (This connection closed normally without authentication) 23:16:33 mariadb | 2024-03-01 23:14:04 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.11' (This connection closed normally without authentication) 23:16:33 mariadb | 2024-03-01 23:14:05 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.8' (This connection closed normally without authentication) 23:16:33 mariadb | 2024-03-01 23:14:05 7 [Warning] Aborted connection 7 to db: 'unconnected' user: 'unauthenticated' host: '172.17.0.9' (This connection closed normally without authentication) 23:16:33 policy-apex-pdp | security.protocol = PLAINTEXT 23:16:33 policy-apex-pdp | security.providers = null 23:16:33 policy-apex-pdp | send.buffer.bytes = 131072 23:16:33 policy-apex-pdp | socket.connection.setup.timeout.max.ms = 30000 23:16:33 policy-apex-pdp | socket.connection.setup.timeout.ms = 10000 23:16:33 policy-apex-pdp | ssl.cipher.suites = null 23:16:33 policy-apex-pdp | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:33 policy-apex-pdp | ssl.endpoint.identification.algorithm = https 23:16:33 policy-apex-pdp | ssl.engine.factory.class = null 23:16:33 policy-apex-pdp | ssl.key.password = null 23:16:33 policy-apex-pdp | ssl.keymanager.algorithm = SunX509 23:16:33 policy-apex-pdp | ssl.keystore.certificate.chain = null 23:16:33 policy-apex-pdp | ssl.keystore.key = null 23:16:33 policy-apex-pdp | ssl.keystore.location = null 23:16:33 policy-apex-pdp | ssl.keystore.password = null 23:16:33 policy-apex-pdp | ssl.keystore.type = JKS 23:16:33 policy-apex-pdp | ssl.protocol = TLSv1.3 23:16:33 policy-apex-pdp | ssl.provider = null 23:16:33 policy-apex-pdp | ssl.secure.random.implementation = null 23:16:33 policy-apex-pdp | ssl.trustmanager.algorithm = PKIX 23:16:33 policy-apex-pdp | ssl.truststore.certificates = null 23:16:33 policy-apex-pdp | ssl.truststore.location = null 23:16:33 policy-apex-pdp | ssl.truststore.password = null 23:16:33 policy-apex-pdp | ssl.truststore.type = JKS 23:16:33 policy-apex-pdp | transaction.timeout.ms = 60000 23:16:33 policy-apex-pdp | transactional.id = null 23:16:33 policy-apex-pdp | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:33 policy-apex-pdp | 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.463+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
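The ProducerConfig values above (StringSerializer for values, no transactional.id, and the "Instantiated an idempotent producer" notice) are what apex-pdp uses to publish its PDP_STATUS heartbeats onto policy-pdp-pap, as seen in the [OUT|KAFKA|policy-pdp-pap] lines that follow. A minimal illustrative producer with the same serializer settings (not the ONAP InlineKafkaTopicSink wrapper; the payload string is an abridged stand-in for the heartbeat JSON shown below):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class PdpStatusPublisherSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // Abridged heartbeat payload; the real message carries requestId, timestampMs, name, etc.
            String heartbeat = "{\"pdpType\":\"apex\",\"state\":\"PASSIVE\",\"messageName\":\"PDP_STATUS\"}";
            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.send(new ProducerRecord<>("policy-pdp-pap", heartbeat));
                producer.flush();
            }
        }
    }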
23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.478+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.478+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.478+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709334877478 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.478+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=3ddc54c6-966d-49c4-8dc0-dcc2b9920774, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.478+00:00|INFO|ServiceManager|main] service manager starting set alive 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.479+00:00|INFO|ServiceManager|main] service manager starting register pdp status context object 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.481+00:00|INFO|ServiceManager|main] service manager starting topic sinks 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.481+00:00|INFO|ServiceManager|main] service manager starting Pdp Status publisher 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.483+00:00|INFO|ServiceManager|main] service manager starting Register pdp update listener 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.483+00:00|INFO|ServiceManager|main] service manager starting Register pdp state change request dispatcher 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.483+00:00|INFO|ServiceManager|main] service manager starting Message Dispatcher 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.483+00:00|INFO|TopicBase|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=d5634529-e7dd-41ae-91a6-87fa8cb77024, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=0, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=0]]]]: registering org.onap.policy.common.endpoints.listeners.MessageTypeDispatcher@e077866 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.484+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=d5634529-e7dd-41ae-91a6-87fa8cb77024, consumerInstance=policy-apex-pdp, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: register: start not attempted 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.484+00:00|INFO|ServiceManager|main] service manager starting Create REST server 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.500+00:00|INFO|OrderedServiceImpl|Timer-0] ***** OrderedServiceImpl implementers: 23:16:33 policy-apex-pdp | [] 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.502+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 23:16:33 policy-apex-pdp | 
{"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"42c6b7e5-6c9a-4a53-977a-d8beb2627a2f","timestampMs":1709334877484,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup"} 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.672+00:00|INFO|ServiceManager|main] service manager starting Rest Server 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.676+00:00|INFO|ServiceManager|main] service manager starting 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.676+00:00|INFO|ServiceManager|main] service manager starting REST RestServerParameters 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.676+00:00|INFO|JettyServletServer|main] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5ebd56e9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@63f34b70{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=null, servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.691+00:00|INFO|ServiceManager|main] service manager started 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.691+00:00|INFO|ServiceManager|main] service manager started 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.692+00:00|INFO|ApexStarterMain|main] Started policy-apex-pdp service successfully. 
23:16:33 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:33 policy-pap | sasl.mechanism = GSSAPI 23:16:33 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:33 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:33 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:33 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:33 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:33 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:33 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:33 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:33 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:33 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:33 policy-pap | security.protocol = PLAINTEXT 23:16:33 policy-pap | security.providers = null 23:16:33 policy-pap | send.buffer.bytes = 131072 23:16:33 policy-pap | session.timeout.ms = 45000 23:16:33 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:33 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:33 policy-pap | ssl.cipher.suites = null 23:16:33 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:33 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:33 policy-pap | ssl.engine.factory.class = null 23:16:33 policy-pap | ssl.key.password = null 23:16:33 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:33 policy-pap | ssl.keystore.certificate.chain = null 23:16:33 policy-pap | ssl.keystore.key = null 23:16:33 policy-pap | ssl.keystore.location = null 23:16:33 policy-pap | ssl.keystore.password = null 23:16:33 policy-pap | ssl.keystore.type = JKS 23:16:33 policy-pap | ssl.protocol = TLSv1.3 23:16:33 policy-pap | ssl.provider = null 23:16:33 policy-pap | ssl.secure.random.implementation = null 23:16:33 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:33 policy-pap | ssl.truststore.certificates = null 23:16:33 policy-pap | ssl.truststore.location = null 23:16:33 policy-pap | ssl.truststore.password = null 23:16:33 policy-pap | ssl.truststore.type = JKS 23:16:33 policy-pap | value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer 23:16:33 policy-pap | 23:16:33 policy-pap | [2024-03-01T23:14:35.491+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:33 policy-pap | [2024-03-01T23:14:35.491+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:33 policy-pap | [2024-03-01T23:14:35.492+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709334875491 23:16:33 policy-pap | [2024-03-01T23:14:35.492+00:00|INFO|KafkaConsumer|main] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Subscribed to topic(s): policy-pdp-pap 23:16:33 policy-pap | [2024-03-01T23:14:35.492+00:00|INFO|ServiceManager|main] Policy PAP starting topics 23:16:33 policy-db-migrator | > upgrade 0200-jpatoscacapabilitytype_properties.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscacapabilitytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0210-jpatoscadatatype_constraints.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_constraints (name VARCHAR(120) NULL, version VARCHAR(20) NULL, 
CONSTRAINTS VARCHAR(255) NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0220-jpatoscadatatype_metadata.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0230-jpatoscadatatype_properties.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscadatatype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0240-jpatoscanodetemplate_metadata.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0250-jpatoscanodetemplate_properties.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetemplate_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0260-jpatoscanodetype_metadata.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0270-jpatoscanodetype_properties.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscanodetype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0280-jpatoscapolicy_metadata.sql 23:16:33 kafka | max.incremental.fetch.session.cache.slots = 1000 23:16:33 kafka | message.max.bytes = 1048588 23:16:33 kafka | metadata.log.dir = null 23:16:33 kafka | metadata.log.max.record.bytes.between.snapshots = 20971520 23:16:33 kafka | metadata.log.max.snapshot.interval.ms = 3600000 23:16:33 kafka | metadata.log.segment.bytes = 1073741824 23:16:33 kafka | metadata.log.segment.min.bytes = 8388608 23:16:33 kafka | metadata.log.segment.ms = 604800000 23:16:33 kafka | metadata.max.idle.interval.ms = 500 23:16:33 kafka | metadata.max.retention.bytes = 104857600 23:16:33 kafka | metadata.max.retention.ms = 604800000 23:16:33 kafka | metric.reporters = [] 23:16:33 kafka | metrics.num.samples = 2 23:16:33 kafka | metrics.recording.level = INFO 23:16:33 kafka | 
metrics.sample.window.ms = 30000 23:16:33 kafka | min.insync.replicas = 1 23:16:33 kafka | node.id = 1 23:16:33 kafka | num.io.threads = 8 23:16:33 kafka | num.network.threads = 3 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.691+00:00|INFO|JettyServletServer|RestServerParameters-6969] JettyJerseyServer [Jerseyservlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=swagger-6969, toString()=JettyServletServer(name=RestServerParameters, host=0.0.0.0, port=6969, sniHostCheck=false, user=policyadmin, password=zb!XztG34, contextPath=/, jettyServer=Server@5ebd56e9{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@63f34b70{/,null,STOPPED}, connector=RestServerParameters@5d25e6bb{HTTP/1.1, (http/1.1)}{0.0.0.0:6969}, jettyThread=Thread[RestServerParameters-6969,5,main], servlets={/metrics=io.prometheus.client.servlet.jakarta.exporter.MetricsServlet-3591009c==io.prometheus.client.servlet.jakarta.exporter.MetricsServlet@634b550e{jsp=null,order=-1,inst=false,async=true,src=EMBEDDED:null,STOPPED}, /*=org.glassfish.jersey.servlet.ServletContainer-64c2b546==org.glassfish.jersey.servlet.ServletContainer@2d1bc350{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.838+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: M5MPGqipTkazoD2lpH5myg 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.838+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d5634529-e7dd-41ae-91a6-87fa8cb77024-2, groupId=d5634529-e7dd-41ae-91a6-87fa8cb77024] Cluster ID: M5MPGqipTkazoD2lpH5myg 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.839+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 2 with epoch 0 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.839+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d5634529-e7dd-41ae-91a6-87fa8cb77024-2, groupId=d5634529-e7dd-41ae-91a6-87fa8cb77024] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.846+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d5634529-e7dd-41ae-91a6-87fa8cb77024-2, groupId=d5634529-e7dd-41ae-91a6-87fa8cb77024] (Re-)joining group 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.858+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d5634529-e7dd-41ae-91a6-87fa8cb77024-2, groupId=d5634529-e7dd-41ae-91a6-87fa8cb77024] Request joining group due to: need to re-join with the given member-id: consumer-d5634529-e7dd-41ae-91a6-87fa8cb77024-2-ce322a6e-d562-4bb5-a0de-80fe00c55a56 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.859+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d5634529-e7dd-41ae-91a6-87fa8cb77024-2, groupId=d5634529-e7dd-41ae-91a6-87fa8cb77024] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 23:16:33 policy-apex-pdp | [2024-03-01T23:14:37.859+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d5634529-e7dd-41ae-91a6-87fa8cb77024-2, groupId=d5634529-e7dd-41ae-91a6-87fa8cb77024] (Re-)joining group 23:16:33 policy-apex-pdp | [2024-03-01T23:14:38.347+00:00|INFO|GsonMessageBodyHandler|RestServerParameters-6969] Using GSON for REST calls 23:16:33 policy-apex-pdp | [2024-03-01T23:14:38.348+00:00|INFO|YamlMessageBodyHandler|RestServerParameters-6969] Accepting YAML for REST calls 23:16:33 policy-apex-pdp | [2024-03-01T23:14:40.866+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d5634529-e7dd-41ae-91a6-87fa8cb77024-2, groupId=d5634529-e7dd-41ae-91a6-87fa8cb77024] Successfully joined group with generation Generation{generationId=1, memberId='consumer-d5634529-e7dd-41ae-91a6-87fa8cb77024-2-ce322a6e-d562-4bb5-a0de-80fe00c55a56', protocol='range'} 23:16:33 policy-apex-pdp | [2024-03-01T23:14:40.875+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d5634529-e7dd-41ae-91a6-87fa8cb77024-2, groupId=d5634529-e7dd-41ae-91a6-87fa8cb77024] Finished assignment for group at generation 1: {consumer-d5634529-e7dd-41ae-91a6-87fa8cb77024-2-ce322a6e-d562-4bb5-a0de-80fe00c55a56=Assignment(partitions=[policy-pdp-pap-0])} 23:16:33 policy-apex-pdp | [2024-03-01T23:14:40.885+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d5634529-e7dd-41ae-91a6-87fa8cb77024-2, groupId=d5634529-e7dd-41ae-91a6-87fa8cb77024] Successfully synced group in generation Generation{generationId=1, memberId='consumer-d5634529-e7dd-41ae-91a6-87fa8cb77024-2-ce322a6e-d562-4bb5-a0de-80fe00c55a56', protocol='range'} 23:16:33 policy-apex-pdp | [2024-03-01T23:14:40.885+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d5634529-e7dd-41ae-91a6-87fa8cb77024-2, groupId=d5634529-e7dd-41ae-91a6-87fa8cb77024] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:33 policy-apex-pdp | [2024-03-01T23:14:40.887+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d5634529-e7dd-41ae-91a6-87fa8cb77024-2, groupId=d5634529-e7dd-41ae-91a6-87fa8cb77024] Adding newly assigned partitions: policy-pdp-pap-0 23:16:33 policy-apex-pdp | [2024-03-01T23:14:40.894+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d5634529-e7dd-41ae-91a6-87fa8cb77024-2, groupId=d5634529-e7dd-41ae-91a6-87fa8cb77024] Found no committed offset for partition policy-pdp-pap-0 23:16:33 policy-apex-pdp | [2024-03-01T23:14:40.905+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-d5634529-e7dd-41ae-91a6-87fa8cb77024-2, groupId=d5634529-e7dd-41ae-91a6-87fa8cb77024] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 
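The sequence above (MemberIdRequiredException on the first join attempt, re-join, assignment of policy-pdp-pap-0, no committed offset, reset to the latest position) is the normal first-join handshake for a brand-new consumer group. To observe the same assignment events in a consumer of your own, a hedged sketch using the standard ConsumerRebalanceListener callback (group id and class name here are illustrative, not from this log):

    import java.time.Duration;
    import java.util.Collection;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class RebalanceWatcher {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "rebalance-watcher"); // illustrative group id
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("policy-pdp-pap"), new ConsumerRebalanceListener() {
                    @Override
                    public void onPartitionsAssigned(Collection<TopicPartition> parts) {
                        // Mirrors the "Adding newly assigned partitions" log line above
                        System.out.println("Assigned: " + parts);
                    }
                    @Override
                    public void onPartitionsRevoked(Collection<TopicPartition> parts) {
                        System.out.println("Revoked: " + parts);
                    }
                });
                consumer.poll(Duration.ofSeconds(5)); // the group join happens inside poll()
            }
        }
    }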
23:16:33 policy-apex-pdp | [2024-03-01T23:14:56.151+00:00|INFO|RequestLog|qtp1068445309-33] 172.17.0.3 - policyadmin [01/Mar/2024:23:14:56 +0000] "GET /metrics HTTP/1.1" 200 10650 "-" "Prometheus/2.50.1" 23:16:33 policy-apex-pdp | [2024-03-01T23:14:57.483+00:00|INFO|network|Timer-0] [OUT|KAFKA|policy-pdp-pap] 23:16:33 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"8df0cb48-9297-480d-ad60-c4ceb603ea4a","timestampMs":1709334897483,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup"} 23:16:33 policy-apex-pdp | [2024-03-01T23:14:57.513+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:33 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"8df0cb48-9297-480d-ad60-c4ceb603ea4a","timestampMs":1709334897483,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup"} 23:16:33 policy-apex-pdp | [2024-03-01T23:14:57.515+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:33 policy-apex-pdp | [2024-03-01T23:14:57.675+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:33 policy-apex-pdp | {"source":"pap-b0e01990-d32f-4dcf-a2d1-6c45f49736ab","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"82cc1e45-516f-4977-9810-91f911494c61","timestampMs":1709334897617,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:33 policy-apex-pdp | [2024-03-01T23:14:57.683+00:00|WARN|Registry|KAFKA-source-policy-pdp-pap] replacing previously registered: object:pdp/status/publisher 23:16:33 policy-apex-pdp | [2024-03-01T23:14:57.683+00:00|INFO|network|Timer-1] [OUT|KAFKA|policy-pdp-pap] 23:16:33 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"dca10b46-9c4d-48cd-9c8f-991d2f0a4c73","timestampMs":1709334897683,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup"} 23:16:33 policy-apex-pdp | [2024-03-01T23:14:57.684+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:33 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"82cc1e45-516f-4977-9810-91f911494c61","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"c311dd0a-1819-449e-9e7e-2b2e2fc4e4eb","timestampMs":1709334897684,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:33 policy-apex-pdp | [2024-03-01T23:14:57.696+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:33 policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"dca10b46-9c4d-48cd-9c8f-991d2f0a4c73","timestampMs":1709334897683,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup"} 23:16:33 policy-apex-pdp | [2024-03-01T23:14:57.696+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:33 policy-apex-pdp | [2024-03-01T23:14:57.700+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:33 
policy-apex-pdp | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"82cc1e45-516f-4977-9810-91f911494c61","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"c311dd0a-1819-449e-9e7e-2b2e2fc4e4eb","timestampMs":1709334897684,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:33 policy-apex-pdp | [2024-03-01T23:14:57.701+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:33 policy-apex-pdp | [2024-03-01T23:14:57.718+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:33 policy-apex-pdp | {"source":"pap-b0e01990-d32f-4dcf-a2d1-6c45f49736ab","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"1f5da1e6-b10a-4031-a568-7543e4e07892","timestampMs":1709334897618,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:33 policy-apex-pdp | [2024-03-01T23:14:57.720+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:33 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"1f5da1e6-b10a-4031-a568-7543e4e07892","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"896ca6e5-fcb7-4e4a-b493-bdc4aa434dbf","timestampMs":1709334897720,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:33 policy-apex-pdp | [2024-03-01T23:14:57.730+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:33 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"1f5da1e6-b10a-4031-a568-7543e4e07892","responseStatus":"SUCCESS","responseMessage":"State changed to active. 
No policies found."},"messageName":"PDP_STATUS","requestId":"896ca6e5-fcb7-4e4a-b493-bdc4aa434dbf","timestampMs":1709334897720,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:33 policy-apex-pdp | [2024-03-01T23:14:57.731+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:33 policy-apex-pdp | [2024-03-01T23:14:57.772+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:33 policy-apex-pdp | {"source":"pap-b0e01990-d32f-4dcf-a2d1-6c45f49736ab","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"3aba36f4-3a85-44dd-b131-b77a9b4bfbb5","timestampMs":1709334897738,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:33 policy-apex-pdp | [2024-03-01T23:14:57.775+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [OUT|KAFKA|policy-pdp-pap] 23:16:33 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"3aba36f4-3a85-44dd-b131-b77a9b4bfbb5","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"bd50a662-68c8-4d5c-b69b-8f62e11e8a9f","timestampMs":1709334897774,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:33 policy-apex-pdp | [2024-03-01T23:14:57.781+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:33 policy-apex-pdp | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"3aba36f4-3a85-44dd-b131-b77a9b4bfbb5","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"bd50a662-68c8-4d5c-b69b-8f62e11e8a9f","timestampMs":1709334897774,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:33 policy-apex-pdp | [2024-03-01T23:14:57.781+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATUS 23:16:33 policy-apex-pdp | [2024-03-01T23:15:56.079+00:00|INFO|RequestLog|qtp1068445309-28] 172.17.0.3 - policyadmin [01/Mar/2024:23:15:56 +0000] "GET /metrics HTTP/1.1" 200 10651 "-" "Prometheus/2.50.1" 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0290-jpatoscapolicy_properties.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0300-jpatoscapolicy_targets.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicy_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 
policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0310-jpatoscapolicytype_metadata.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0320-jpatoscapolicytype_properties.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0330-jpatoscapolicytype_targets.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_targets (name VARCHAR(120) NULL, version VARCHAR(20) NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0340-jpatoscapolicytype_triggers.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscapolicytype_triggers (name VARCHAR(120) NULL, version VARCHAR(20) NULL, TRIGGERS VARCHAR(255) NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0350-jpatoscaproperty_constraints.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_constraints (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, CONSTRAINTS VARCHAR(255) NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0360-jpatoscaproperty_metadata.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaproperty_metadata (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0370-jpatoscarelationshiptype_metadata.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0380-jpatoscarelationshiptype_properties.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarelationshiptype_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGBLOB DEFAULT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 
0390-jpatoscarequirement_metadata.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0400-jpatoscarequirement_occurrences.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_occurrences (name VARCHAR(120) NULL, version VARCHAR(20) NULL, OCCURRENCES INT DEFAULT NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0410-jpatoscarequirement_properties.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscarequirement_properties (name VARCHAR(120) NULL, version VARCHAR(20) NULL, PROPERTIES LONGTEXT NULL, PROPERTIES_KEY VARCHAR(255) NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0420-jpatoscaservicetemplate_metadata.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscaservicetemplate_metadata (name VARCHAR(120) NULL, version VARCHAR(20) NULL, METADATA VARCHAR(255) NULL, METADATA_KEY VARCHAR(255) NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0430-jpatoscatopologytemplate_inputs.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpatoscatopologytemplate_inputs (parentLocalName VARCHAR(120) NULL, localName VARCHAR(120) NULL, parentKeyVersion VARCHAR(15) NULL, parentKeyName VARCHAR(120) NULL, INPUTS LONGBLOB DEFAULT NULL, INPUTS_KEY VARCHAR(255) NULL) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0440-pdpgroup_pdpsubgroup.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup_pdpsubgroup (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPGROUP_PDPSUBGROUP (name, version, parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0450-pdpgroup.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpgroup (`DESCRIPTION` VARCHAR(255) NULL, PDPGROUPSTATE INT DEFAULT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPGROUP (name, version)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.708783243Z level=info msg="Executing migration" id="add unique index permission.role_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.71045409Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=1.670377ms 23:16:33 
grafana | logger=migrator t=2024-03-01T23:14:00.715039458Z level=info msg="Executing migration" id="add unique index role_id_action_scope" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.716796605Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=1.751467ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.722299637Z level=info msg="Executing migration" id="create role table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.72307036Z level=info msg="Migration successfully executed" id="create role table" duration=770.243µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.727567359Z level=info msg="Executing migration" id="add column display_name" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.734668957Z level=info msg="Migration successfully executed" id="add column display_name" duration=7.101318ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.739275955Z level=info msg="Executing migration" id="add column group_name" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.746550535Z level=info msg="Migration successfully executed" id="add column group_name" duration=7.27397ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.751562504Z level=info msg="Executing migration" id="add index role.org_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.752257738Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=695.034µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.756559255Z level=info msg="Executing migration" id="add unique index role_org_id_name" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.758259022Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=1.734248ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.76286841Z level=info msg="Executing migration" id="add index role_org_id_uid" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.764052065Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=1.184685ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.768477472Z level=info msg="Executing migration" id="create team role table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.769204206Z level=info msg="Migration successfully executed" id="create team role table" duration=727.794µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.774511687Z level=info msg="Executing migration" id="add index team_role.org_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.775560541Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.048414ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.779749418Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.781525805Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=1.775607ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.787733429Z level=info msg="Executing migration" id="add index team_role.team_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.788755064Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.020925ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.793198672Z level=info msg="Executing migration" id="create user role table" 23:16:33 grafana | logger=migrator 
t=2024-03-01T23:14:00.793919765Z level=info msg="Migration successfully executed" id="create user role table" duration=720.763µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.798272702Z level=info msg="Executing migration" id="add index user_role.org_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.799284966Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.012234ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.804798688Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.806503405Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.703927ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.810477671Z level=info msg="Executing migration" id="add index user_role.user_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.811760836Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=1.284835ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.817421659Z level=info msg="Executing migration" id="create builtin role table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.818154062Z level=info msg="Migration successfully executed" id="create builtin role table" duration=731.783µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.825629252Z level=info msg="Executing migration" id="add index builtin_role.role_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.826689356Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=1.065164ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.832023418Z level=info msg="Executing migration" id="add index builtin_role.name" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.833817825Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=1.794327ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.83778108Z level=info msg="Executing migration" id="Add column org_id to builtin_role table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.847367209Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=9.591279ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.854920899Z level=info msg="Executing migration" id="add index builtin_role.org_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.856109915Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.183216ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.859384567Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.860443231Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=1.058304ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.864119777Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.865146401Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=1.026614ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.870045491Z level=info msg="Executing migration" id="add unique index role.uid" 23:16:33 prometheus | ts=2024-03-01T23:13:54.080Z caller=main.go:564 level=info 
msg="No time or size retention was set so using the default time retention" duration=15d 23:16:33 prometheus | ts=2024-03-01T23:13:54.080Z caller=main.go:608 level=info msg="Starting Prometheus Server" mode=server version="(version=2.50.1, branch=HEAD, revision=8c9b0285360a0b6288d76214a75ce3025bce4050)" 23:16:33 prometheus | ts=2024-03-01T23:13:54.080Z caller=main.go:613 level=info build_context="(go=go1.21.7, platform=linux/amd64, user=root@6213bb3ee580, date=20240226-11:36:26, tags=netgo,builtinassets,stringlabels)" 23:16:33 prometheus | ts=2024-03-01T23:13:54.080Z caller=main.go:614 level=info host_details="(Linux 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 prometheus (none))" 23:16:33 prometheus | ts=2024-03-01T23:13:54.080Z caller=main.go:615 level=info fd_limits="(soft=1048576, hard=1048576)" 23:16:33 prometheus | ts=2024-03-01T23:13:54.080Z caller=main.go:616 level=info vm_limits="(soft=unlimited, hard=unlimited)" 23:16:33 prometheus | ts=2024-03-01T23:13:54.082Z caller=web.go:565 level=info component=web msg="Start listening for connections" address=0.0.0.0:9090 23:16:33 prometheus | ts=2024-03-01T23:13:54.083Z caller=main.go:1118 level=info msg="Starting TSDB ..." 23:16:33 prometheus | ts=2024-03-01T23:13:54.085Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=[::]:9090 23:16:33 prometheus | ts=2024-03-01T23:13:54.085Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=[::]:9090 23:16:33 prometheus | ts=2024-03-01T23:13:54.089Z caller=head.go:610 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any" 23:16:33 prometheus | ts=2024-03-01T23:13:54.089Z caller=head.go:692 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=2.321µs 23:16:33 prometheus | ts=2024-03-01T23:13:54.089Z caller=head.go:700 level=info component=tsdb msg="Replaying WAL, this may take a while" 23:16:33 prometheus | ts=2024-03-01T23:13:54.090Z caller=head.go:771 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0 23:16:33 prometheus | ts=2024-03-01T23:13:54.090Z caller=head.go:808 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=72.88µs wal_replay_duration=392.781µs wbl_replay_duration=200ns total_replay_duration=497.042µs 23:16:33 prometheus | ts=2024-03-01T23:13:54.092Z caller=main.go:1139 level=info fs_type=EXT4_SUPER_MAGIC 23:16:33 prometheus | ts=2024-03-01T23:13:54.092Z caller=main.go:1142 level=info msg="TSDB started" 23:16:33 prometheus | ts=2024-03-01T23:13:54.092Z caller=main.go:1324 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml 23:16:33 prometheus | ts=2024-03-01T23:13:54.093Z caller=main.go:1361 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=1.041865ms db_storage=1.84µs remote_storage=2.08µs web_handler=680ns query_engine=940ns scrape=268.371µs scrape_sd=152.881µs notify=31.43µs notify_sd=10.65µs rules=1.7µs tracing=6.1µs 23:16:33 prometheus | ts=2024-03-01T23:13:54.093Z caller=main.go:1103 level=info msg="Server is ready to receive web requests." 23:16:33 prometheus | ts=2024-03-01T23:13:54.093Z caller=manager.go:146 level=info component="rule manager" msg="Starting rule manager..." 
23:16:33 policy-pap | [2024-03-01T23:14:35.492+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=policy-pap, consumerInstance=68dd5619-1612-461c-9253-6bb52b88e744, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-heartbeat,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-heartbeat, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:33 policy-pap | [2024-03-01T23:14:35.492+00:00|INFO|SingleThreadedBusTopicSource|main] SingleThreadedKafkaTopicSource [getTopicCommInfrastructure()=KAFKA, toString()=SingleThreadedBusTopicSource [consumerGroup=b06317e2-ac80-4179-891e-43beb77f3709, consumerInstance=policy-pap, fetchTimeout=15000, fetchLimit=-1, consumer=KafkaConsumerWrapper [fetchTimeout=15000], alive=true, locked=false, uebThread=Thread[KAFKA-source-policy-pdp-pap,5,main], topicListeners=1, toString()=BusTopicBase [apiKey=null, apiSecret=null, useHttps=false, allowSelfSignedCerts=false, toString()=TopicBase [servers=[kafka:9092], topic=policy-pdp-pap, effectiveTopic=policy-pdp-pap, #recentEvents=0, locked=false, #topicListeners=1]]]]: starting 23:16:33 policy-pap | [2024-03-01T23:14:35.493+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=b0aacfbd-0e2f-4d79-9605-b4e7efdadb74, alive=false, publisher=null]]: starting 23:16:33 policy-pap | [2024-03-01T23:14:35.511+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:16:33 policy-pap | acks = -1 23:16:33 policy-pap | auto.include.jmx.reporter = true 23:16:33 policy-pap | batch.size = 16384 23:16:33 policy-pap | bootstrap.servers = [kafka:9092] 23:16:33 policy-pap | buffer.memory = 33554432 23:16:33 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:33 policy-pap | client.id = producer-1 23:16:33 policy-pap | compression.type = none 23:16:33 policy-pap | connections.max.idle.ms = 540000 23:16:33 policy-pap | delivery.timeout.ms = 120000 23:16:33 policy-pap | enable.idempotence = true 23:16:33 policy-pap | interceptor.classes = [] 23:16:33 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:33 policy-pap | linger.ms = 0 23:16:33 policy-pap | max.block.ms = 60000 23:16:33 policy-pap | max.in.flight.requests.per.connection = 5 23:16:33 policy-pap | max.request.size = 1048576 23:16:33 policy-pap | metadata.max.age.ms = 300000 23:16:33 policy-pap | metadata.max.idle.ms = 300000 23:16:33 policy-pap | metric.reporters = [] 23:16:33 policy-pap | metrics.num.samples = 2 23:16:33 policy-pap | metrics.recording.level = INFO 23:16:33 policy-pap | metrics.sample.window.ms = 30000 23:16:33 policy-pap | partitioner.adaptive.partitioning.enable = true 23:16:33 policy-pap | partitioner.availability.timeout.ms = 0 23:16:33 policy-pap | partitioner.class = null 23:16:33 policy-pap | partitioner.ignore.keys = false 23:16:33 policy-pap | receive.buffer.bytes = 32768 23:16:33 policy-pap | reconnect.backoff.max.ms = 1000 23:16:33 policy-pap | reconnect.backoff.ms = 50 23:16:33 policy-pap | request.timeout.ms = 30000 23:16:33 policy-pap | retries = 2147483647 23:16:33 policy-pap | retry.backoff.ms = 100 23:16:33 policy-pap | 
sasl.client.callback.handler.class = null 23:16:33 policy-pap | sasl.jaas.config = null 23:16:33 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:33 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:33 policy-pap | sasl.kerberos.service.name = null 23:16:33 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:33 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:33 policy-pap | sasl.login.callback.handler.class = null 23:16:33 policy-pap | sasl.login.class = null 23:16:33 policy-pap | sasl.login.connect.timeout.ms = null 23:16:33 policy-pap | sasl.login.read.timeout.ms = null 23:16:33 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:33 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:33 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:33 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:33 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:33 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:33 policy-pap | sasl.mechanism = GSSAPI 23:16:33 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:33 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:33 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:33 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:33 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:33 kafka | num.partitions = 1 23:16:33 kafka | num.recovery.threads.per.data.dir = 1 23:16:33 kafka | num.replica.alter.log.dirs.threads = null 23:16:33 kafka | num.replica.fetchers = 1 23:16:33 kafka | offset.metadata.max.bytes = 4096 23:16:33 kafka | offsets.commit.required.acks = -1 23:16:33 kafka | offsets.commit.timeout.ms = 5000 23:16:33 kafka | offsets.load.buffer.size = 5242880 23:16:33 kafka | offsets.retention.check.interval.ms = 600000 23:16:33 kafka | offsets.retention.minutes = 10080 23:16:33 kafka | offsets.topic.compression.codec = 0 23:16:33 kafka | offsets.topic.num.partitions = 50 23:16:33 kafka | offsets.topic.replication.factor = 1 23:16:33 kafka | offsets.topic.segment.bytes = 104857600 23:16:33 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding 23:16:33 kafka | password.encoder.iterations = 4096 23:16:33 kafka | password.encoder.key.length = 128 23:16:33 kafka | password.encoder.keyfactory.algorithm = null 23:16:33 kafka | password.encoder.old.secret = null 23:16:33 kafka | password.encoder.secret = null 23:16:33 kafka | principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder 23:16:33 kafka | process.roles = [] 23:16:33 kafka | producer.id.expiration.check.interval.ms = 600000 23:16:33 kafka | producer.id.expiration.ms = 86400000 23:16:33 kafka | producer.purgatory.purge.interval.requests = 1000 23:16:33 kafka | queued.max.request.bytes = -1 23:16:33 kafka | queued.max.requests = 500 23:16:33 kafka | quota.window.num = 11 23:16:33 kafka | quota.window.size.seconds = 1 23:16:33 kafka | remote.log.index.file.cache.total.size.bytes = 1073741824 23:16:33 kafka | remote.log.manager.task.interval.ms = 30000 23:16:33 kafka | remote.log.manager.task.retry.backoff.max.ms = 30000 23:16:33 kafka | remote.log.manager.task.retry.backoff.ms = 500 23:16:33 kafka | remote.log.manager.task.retry.jitter = 0.2 23:16:33 kafka | remote.log.manager.thread.pool.size = 10 23:16:33 kafka | remote.log.metadata.custom.metadata.max.bytes = 128 23:16:33 kafka | remote.log.metadata.manager.class.name = 
org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager 23:16:33 kafka | remote.log.metadata.manager.class.path = null 23:16:33 kafka | remote.log.metadata.manager.impl.prefix = rlmm.config. 23:16:33 kafka | remote.log.metadata.manager.listener.name = null 23:16:33 kafka | remote.log.reader.max.pending.tasks = 100 23:16:33 kafka | remote.log.reader.threads = 10 23:16:33 kafka | remote.log.storage.manager.class.name = null 23:16:33 kafka | remote.log.storage.manager.class.path = null 23:16:33 kafka | remote.log.storage.manager.impl.prefix = rsm.config. 23:16:33 kafka | remote.log.storage.system.enable = false 23:16:33 kafka | replica.fetch.backoff.ms = 1000 23:16:33 kafka | replica.fetch.max.bytes = 1048576 23:16:33 kafka | replica.fetch.min.bytes = 1 23:16:33 kafka | replica.fetch.response.max.bytes = 10485760 23:16:33 kafka | replica.fetch.wait.max.ms = 500 23:16:33 kafka | replica.high.watermark.checkpoint.interval.ms = 5000 23:16:33 kafka | replica.lag.time.max.ms = 30000 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.871297595Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=1.252145ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.875331952Z level=info msg="Executing migration" id="create seed assignment table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.876026414Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=694.332µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.879195117Z level=info msg="Executing migration" id="add unique index builtin_role_role_name" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.880233081Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.034924ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.884709839Z level=info msg="Executing migration" id="add column hidden to role table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.892611411Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=7.896682ms 23:16:33 kafka | replica.selector.class = null 23:16:33 kafka | replica.socket.receive.buffer.bytes = 65536 23:16:33 kafka | replica.socket.timeout.ms = 30000 23:16:33 kafka | replication.quota.window.num = 11 23:16:33 kafka | replication.quota.window.size.seconds = 1 23:16:33 kafka | request.timeout.ms = 30000 23:16:33 kafka | reserved.broker.max.id = 1000 23:16:33 kafka | sasl.client.callback.handler.class = null 23:16:33 kafka | sasl.enabled.mechanisms = [GSSAPI] 23:16:33 kafka | sasl.jaas.config = null 23:16:33 kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:33 kafka | sasl.kerberos.min.time.before.relogin = 60000 23:16:33 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] 23:16:33 kafka | sasl.kerberos.service.name = null 23:16:33 kafka | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:33 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:33 kafka | sasl.login.callback.handler.class = null 23:16:33 kafka | sasl.login.class = null 23:16:33 kafka | sasl.login.connect.timeout.ms = null 23:16:33 kafka | sasl.login.read.timeout.ms = null 23:16:33 kafka | sasl.login.refresh.buffer.seconds = 300 23:16:33 kafka | sasl.login.refresh.min.period.seconds = 60 23:16:33 kafka | sasl.login.refresh.window.factor = 0.8 23:16:33 kafka | sasl.login.refresh.window.jitter = 0.05 23:16:33 kafka | sasl.login.retry.backoff.max.ms = 10000 23:16:33 kafka | 
sasl.login.retry.backoff.ms = 100 23:16:33 kafka | sasl.mechanism.controller.protocol = GSSAPI 23:16:33 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI 23:16:33 kafka | sasl.oauthbearer.clock.skew.seconds = 30 23:16:33 kafka | sasl.oauthbearer.expected.audience = null 23:16:33 kafka | sasl.oauthbearer.expected.issuer = null 23:16:33 kafka | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:33 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:33 kafka | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:33 kafka | sasl.oauthbearer.jwks.endpoint.url = null 23:16:33 kafka | sasl.oauthbearer.scope.claim.name = scope 23:16:33 kafka | sasl.oauthbearer.sub.claim.name = sub 23:16:33 kafka | sasl.oauthbearer.token.endpoint.url = null 23:16:33 kafka | sasl.server.callback.handler.class = null 23:16:33 kafka | sasl.server.max.receive.size = 524288 23:16:33 kafka | security.inter.broker.protocol = PLAINTEXT 23:16:33 kafka | security.providers = null 23:16:33 kafka | server.max.startup.time.ms = 9223372036854775807 23:16:33 kafka | socket.connection.setup.timeout.max.ms = 30000 23:16:33 kafka | socket.connection.setup.timeout.ms = 10000 23:16:33 kafka | socket.listen.backlog.size = 50 23:16:33 kafka | socket.receive.buffer.bytes = 102400 23:16:33 kafka | socket.request.max.bytes = 104857600 23:16:33 kafka | socket.send.buffer.bytes = 102400 23:16:33 kafka | ssl.cipher.suites = [] 23:16:33 kafka | ssl.client.auth = none 23:16:33 kafka | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:33 kafka | ssl.endpoint.identification.algorithm = https 23:16:33 kafka | ssl.engine.factory.class = null 23:16:33 kafka | ssl.key.password = null 23:16:33 kafka | ssl.keymanager.algorithm = SunX509 23:16:33 kafka | ssl.keystore.certificate.chain = null 23:16:33 kafka | ssl.keystore.key = null 23:16:33 kafka | ssl.keystore.location = null 23:16:33 kafka | ssl.keystore.password = null 23:16:33 kafka | ssl.keystore.type = JKS 23:16:33 kafka | ssl.principal.mapping.rules = DEFAULT 23:16:33 kafka | ssl.protocol = TLSv1.3 23:16:33 kafka | ssl.provider = null 23:16:33 kafka | ssl.secure.random.implementation = null 23:16:33 kafka | ssl.trustmanager.algorithm = PKIX 23:16:33 kafka | ssl.truststore.certificates = null 23:16:33 kafka | ssl.truststore.location = null 23:16:33 kafka | ssl.truststore.password = null 23:16:33 kafka | ssl.truststore.type = JKS 23:16:33 kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000 23:16:33 kafka | transaction.max.timeout.ms = 900000 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.896638137Z level=info msg="Executing migration" id="permission kind migration" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.90485495Z level=info msg="Migration successfully executed" id="permission kind migration" duration=8.216813ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.908614105Z level=info msg="Executing migration" id="permission attribute migration" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.916522787Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=7.908102ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.920014341Z level=info msg="Executing migration" id="permission identifier migration" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.927732822Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=7.71812ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.932694961Z level=info 
msg="Executing migration" id="add permission identifier index" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.933713625Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.050154ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.938833476Z level=info msg="Executing migration" id="create query_history table v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.939890281Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=1.056425ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.945043862Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.946239386Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.223525ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.950345172Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.950617094Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=271.412µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.955314302Z level=info msg="Executing migration" id="rbac disabled migrator" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.955366662Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=55.78µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.961294466Z level=info msg="Executing migration" id="teams permissions migration" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.962713682Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=1.419896ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.96728049Z level=info msg="Executing migration" id="dashboard permissions" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.968410365Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=1.131255ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.973056084Z level=info msg="Executing migration" id="dashboard permissions uid scopes" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.973869327Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=814.123µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.977510601Z level=info msg="Executing migration" id="drop managed folder create actions" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.977775432Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=269.251µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.983628966Z level=info msg="Executing migration" id="alerting notification permissions" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.984036537Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=407.601µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.989082338Z level=info msg="Executing migration" id="create query_history_star table v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.990982286Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.899038ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:00.998204824Z level=info 
msg="Executing migration" id="add index query_history.user_id-query_uid" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.000632314Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=2.42931ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.005411367Z level=info msg="Executing migration" id="add column org_id in query_history_star" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.014197455Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=8.784748ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.017743306Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.017868878Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=124.752µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.020358562Z level=info msg="Executing migration" id="create correlation table v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.021087718Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=728.856µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.024438008Z level=info msg="Executing migration" id="add index correlations.uid" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.026008297Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.569529ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.046511672Z level=info msg="Executing migration" id="add index correlations.source_uid" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.048786446Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=2.274374ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.05444518Z level=info msg="Executing migration" id="add correlation config column" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.064790353Z level=info msg="Migration successfully executed" id="add correlation config column" duration=10.346233ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.107911225Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.109905877Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=2.003922ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.116559378Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.117677324Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.118116ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.121300686Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.157851118Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=36.547742ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.163695435Z level=info msg="Executing migration" id="create correlation v2" 23:16:33 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:33 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:33 policy-pap | 
sasl.oauthbearer.scope.claim.name = scope 23:16:33 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:33 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:33 policy-pap | security.protocol = PLAINTEXT 23:16:33 policy-pap | security.providers = null 23:16:33 policy-pap | send.buffer.bytes = 131072 23:16:33 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:33 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:33 policy-pap | ssl.cipher.suites = null 23:16:33 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:33 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:33 policy-pap | ssl.engine.factory.class = null 23:16:33 policy-pap | ssl.key.password = null 23:16:33 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:33 policy-pap | ssl.keystore.certificate.chain = null 23:16:33 policy-pap | ssl.keystore.key = null 23:16:33 policy-pap | ssl.keystore.location = null 23:16:33 policy-pap | ssl.keystore.password = null 23:16:33 policy-pap | ssl.keystore.type = JKS 23:16:33 policy-pap | ssl.protocol = TLSv1.3 23:16:33 policy-pap | ssl.provider = null 23:16:33 policy-pap | ssl.secure.random.implementation = null 23:16:33 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:33 policy-pap | ssl.truststore.certificates = null 23:16:33 policy-pap | ssl.truststore.location = null 23:16:33 policy-pap | ssl.truststore.password = null 23:16:33 policy-pap | ssl.truststore.type = JKS 23:16:33 policy-pap | transaction.timeout.ms = 60000 23:16:33 policy-pap | transactional.id = null 23:16:33 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:33 policy-pap | 23:16:33 policy-pap | [2024-03-01T23:14:35.524+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-1] Instantiated an idempotent producer. 
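The ProducerConfig block above is the Kafka Java client echoing the settings PAP uses for producer-1: acks = -1 (all), enable.idempotence = true, retries = 2147483647, StringSerializer for key and value, bootstrap server kafka:9092, followed by "Instantiated an idempotent producer." A minimal sketch of a producer built with the same key settings is shown below; it is illustrative only (not the policy-pap source), the topic name is the policy-pdp-pap topic from the log, and the payload is made up.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

// Sketch of a producer configured like producer-1 in the log above. Illustrative only.
public class PapLikeProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "all");                      // acks = -1 in the dump
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);         // "Instantiated an idempotent producer."
        props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);       // retries = 2147483647
        props.put(ProducerConfig.LINGER_MS_CONFIG, 0);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Hypothetical payload; PAP publishes PDP-PAP protocol messages on this topic.
            producer.send(new ProducerRecord<>("policy-pdp-pap", "key", "{\"messageName\":\"PDP_UPDATE\"}"));
            producer.flush();
        }
    }
}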
23:16:33 policy-pap | [2024-03-01T23:14:35.542+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:33 policy-pap | [2024-03-01T23:14:35.542+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:33 policy-pap | [2024-03-01T23:14:35.542+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709334875542 23:16:33 policy-pap | [2024-03-01T23:14:35.542+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=b0aacfbd-0e2f-4d79-9605-b4e7efdadb74, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:33 policy-pap | [2024-03-01T23:14:35.542+00:00|INFO|InlineBusTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=faac204d-0282-4121-ab93-f8c193d51656, alive=false, publisher=null]]: starting 23:16:33 policy-pap | [2024-03-01T23:14:35.543+00:00|INFO|ProducerConfig|main] ProducerConfig values: 23:16:33 policy-pap | acks = -1 23:16:33 policy-pap | auto.include.jmx.reporter = true 23:16:33 policy-pap | batch.size = 16384 23:16:33 policy-pap | bootstrap.servers = [kafka:9092] 23:16:33 policy-pap | buffer.memory = 33554432 23:16:33 policy-pap | client.dns.lookup = use_all_dns_ips 23:16:33 policy-pap | client.id = producer-2 23:16:33 policy-pap | compression.type = none 23:16:33 policy-pap | connections.max.idle.ms = 540000 23:16:33 policy-pap | delivery.timeout.ms = 120000 23:16:33 policy-pap | enable.idempotence = true 23:16:33 policy-pap | interceptor.classes = [] 23:16:33 policy-pap | key.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:33 policy-pap | linger.ms = 0 23:16:33 policy-pap | max.block.ms = 60000 23:16:33 policy-pap | max.in.flight.requests.per.connection = 5 23:16:33 policy-pap | max.request.size = 1048576 23:16:33 policy-pap | metadata.max.age.ms = 300000 23:16:33 policy-pap | metadata.max.idle.ms = 300000 23:16:33 policy-pap | metric.reporters = [] 23:16:33 policy-pap | metrics.num.samples = 2 23:16:33 policy-pap | metrics.recording.level = INFO 23:16:33 policy-pap | metrics.sample.window.ms = 30000 23:16:33 policy-pap | partitioner.adaptive.partitioning.enable = true 23:16:33 policy-pap | partitioner.availability.timeout.ms = 0 23:16:33 policy-pap | partitioner.class = null 23:16:33 policy-pap | partitioner.ignore.keys = false 23:16:33 policy-db-migrator | > upgrade 0460-pdppolicystatus.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdppolicystatus (DEPLOY BOOLEAN DEFAULT 0, PDPGROUP VARCHAR(255) DEFAULT NULL, PDPTYPE VARCHAR(255) DEFAULT NULL, STATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_PDPPOLICYSTATUS (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0470-pdp.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdp (HEALTHY INT DEFAULT NULL, MESSAGE VARCHAR(255) DEFAULT NULL, PDPSTATE INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDP (parentLocalName, localName, 
parentKeyVersion, parentKeyName)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0480-pdpstatistics.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpstatistics (PDPGROUPNAME VARCHAR(120) NULL, PDPSUBGROUPNAME VARCHAR(120) NULL, POLICYDEPLOYCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYFAILCOUNT BIGINT DEFAULT NULL, POLICYDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDFAILCOUNT BIGINT DEFAULT NULL, POLICYEXECUTEDSUCCESSCOUNT BIGINT DEFAULT NULL, timeStamp datetime NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_PDPSTATISTICS (timeStamp, name, version)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0490-pdpsubgroup_pdp.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 kafka | transaction.partition.verification.enable = true 23:16:33 kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000 23:16:33 kafka | transaction.state.log.load.buffer.size = 5242880 23:16:33 kafka | transaction.state.log.min.isr = 2 23:16:33 kafka | transaction.state.log.num.partitions = 50 23:16:33 kafka | transaction.state.log.replication.factor = 3 23:16:33 kafka | transaction.state.log.segment.bytes = 104857600 23:16:33 kafka | transactional.id.expiration.ms = 604800000 23:16:33 kafka | unclean.leader.election.enable = false 23:16:33 kafka | unstable.api.versions.enable = false 23:16:33 kafka | zookeeper.clientCnxnSocket = null 23:16:33 kafka | zookeeper.connect = zookeeper:2181 23:16:33 kafka | zookeeper.connection.timeout.ms = null 23:16:33 kafka | zookeeper.max.in.flight.requests = 10 23:16:33 kafka | zookeeper.metadata.migration.enable = false 23:16:33 kafka | zookeeper.session.timeout.ms = 18000 23:16:33 kafka | zookeeper.set.acl = false 23:16:33 kafka | zookeeper.ssl.cipher.suites = null 23:16:33 kafka | zookeeper.ssl.client.enable = false 23:16:33 kafka | zookeeper.ssl.crl.enable = false 23:16:33 kafka | zookeeper.ssl.enabled.protocols = null 23:16:33 kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS 23:16:33 kafka | zookeeper.ssl.keystore.location = null 23:16:33 kafka | zookeeper.ssl.keystore.password = null 23:16:33 kafka | zookeeper.ssl.keystore.type = null 23:16:33 kafka | zookeeper.ssl.ocsp.enable = false 23:16:33 kafka | zookeeper.ssl.protocol = TLSv1.2 23:16:33 kafka | zookeeper.ssl.truststore.location = null 23:16:33 kafka | zookeeper.ssl.truststore.password = null 23:16:33 kafka | zookeeper.ssl.truststore.type = null 23:16:33 kafka | (kafka.server.KafkaConfig) 23:16:33 kafka | [2024-03-01 23:14:06,617] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:33 kafka | [2024-03-01 23:14:06,617] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:33 kafka | [2024-03-01 23:14:06,618] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:33 kafka | [2024-03-01 23:14:06,621] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper) 23:16:33 kafka | [2024-03-01 23:14:06,651] INFO Loading logs from log dirs ArraySeq(/var/lib/kafka/data) (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:06,658] INFO No logs 
found to be loaded in /var/lib/kafka/data (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:06,670] INFO Loaded 0 logs in 20ms (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:06,672] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:06,674] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:06,686] INFO Starting the log cleaner (kafka.log.LogCleaner) 23:16:33 kafka | [2024-03-01 23:14:06,739] INFO [kafka-log-cleaner-thread-0]: Starting (kafka.log.LogCleaner$CleanerThread) 23:16:33 kafka | [2024-03-01 23:14:06,781] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) 23:16:33 kafka | [2024-03-01 23:14:06,795] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) 23:16:33 kafka | [2024-03-01 23:14:06,823] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 23:16:33 kafka | [2024-03-01 23:14:07,153] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 23:16:33 kafka | [2024-03-01 23:14:07,172] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT) (kafka.network.SocketServer) 23:16:33 kafka | [2024-03-01 23:14:07,172] INFO Updated connection-accept-rate max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas) 23:16:33 kafka | [2024-03-01 23:14:07,177] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Created data-plane acceptor and processors for endpoint : ListenerName(PLAINTEXT_HOST) (kafka.network.SocketServer) 23:16:33 kafka | [2024-03-01 23:14:07,181] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Starting (kafka.server.BrokerToControllerRequestThread) 23:16:33 kafka | [2024-03-01 23:14:07,205] INFO [ExpirationReaper-1-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:33 kafka | [2024-03-01 23:14:07,206] INFO [ExpirationReaper-1-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:33 kafka | [2024-03-01 23:14:07,208] INFO [ExpirationReaper-1-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:33 kafka | [2024-03-01 23:14:07,209] INFO [ExpirationReaper-1-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:33 kafka | [2024-03-01 23:14:07,210] INFO [ExpirationReaper-1-RemoteFetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:33 kafka | [2024-03-01 23:14:07,228] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler) 23:16:33 kafka | [2024-03-01 23:14:07,229] INFO [AddPartitionsToTxnSenderThread-1]: Starting (kafka.server.AddPartitionsToTxnManager) 23:16:33 kafka | [2024-03-01 23:14:07,253] INFO Creating /brokers/ids/1 (is it secure? 
false) (kafka.zk.KafkaZkClient) 23:16:33 kafka | [2024-03-01 23:14:07,283] INFO Stat of the created znode at /brokers/ids/1 is: 27,27,1709334847267,1709334847267,1,0,0,72057608820883457,258,0,27 23:16:33 kafka | (kafka.zk.KafkaZkClient) 23:16:33 kafka | [2024-03-01 23:14:07,284] INFO Registered broker 1 at path /brokers/ids/1 with addresses: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092, czxid (broker epoch): 27 (kafka.zk.KafkaZkClient) 23:16:33 kafka | [2024-03-01 23:14:07,344] INFO [ControllerEventThread controllerId=1] Starting (kafka.controller.ControllerEventManager$ControllerEventThread) 23:16:33 kafka | [2024-03-01 23:14:07,351] INFO [ExpirationReaper-1-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:33 kafka | [2024-03-01 23:14:07,357] INFO [ExpirationReaper-1-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:33 kafka | [2024-03-01 23:14:07,359] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:33 kafka | [2024-03-01 23:14:07,364] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient) 23:16:33 kafka | [2024-03-01 23:14:07,372] INFO [Controller id=1] 1 successfully elected as the controller. Epoch incremented to 1 and epoch zk version is now 1 (kafka.controller.KafkaController) 23:16:33 kafka | [2024-03-01 23:14:07,373] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:07,377] INFO [Controller id=1] Creating FeatureZNode at path: /feature with contents: FeatureZNode(2,Enabled,Map()) (kafka.controller.KafkaController) 23:16:33 kafka | [2024-03-01 23:14:07,378] INFO [GroupCoordinator 1]: Startup complete. 
(kafka.coordinator.group.GroupCoordinator) 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.16455641Z level=info msg="Migration successfully executed" id="create correlation v2" duration=861.044µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.171457771Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.173291672Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.832011ms 23:16:33 policy-pap | receive.buffer.bytes = 32768 23:16:33 policy-pap | reconnect.backoff.max.ms = 1000 23:16:33 policy-pap | reconnect.backoff.ms = 50 23:16:33 policy-pap | request.timeout.ms = 30000 23:16:33 policy-pap | retries = 2147483647 23:16:33 policy-pap | retry.backoff.ms = 100 23:16:33 policy-pap | sasl.client.callback.handler.class = null 23:16:33 policy-pap | sasl.jaas.config = null 23:16:33 policy-pap | sasl.kerberos.kinit.cmd = /usr/bin/kinit 23:16:33 policy-pap | sasl.kerberos.min.time.before.relogin = 60000 23:16:33 policy-pap | sasl.kerberos.service.name = null 23:16:33 policy-pap | sasl.kerberos.ticket.renew.jitter = 0.05 23:16:33 policy-pap | sasl.kerberos.ticket.renew.window.factor = 0.8 23:16:33 policy-pap | sasl.login.callback.handler.class = null 23:16:33 policy-pap | sasl.login.class = null 23:16:33 policy-pap | sasl.login.connect.timeout.ms = null 23:16:33 policy-pap | sasl.login.read.timeout.ms = null 23:16:33 policy-pap | sasl.login.refresh.buffer.seconds = 300 23:16:33 policy-pap | sasl.login.refresh.min.period.seconds = 60 23:16:33 policy-pap | sasl.login.refresh.window.factor = 0.8 23:16:33 policy-pap | sasl.login.refresh.window.jitter = 0.05 23:16:33 policy-pap | sasl.login.retry.backoff.max.ms = 10000 23:16:33 policy-pap | sasl.login.retry.backoff.ms = 100 23:16:33 policy-pap | sasl.mechanism = GSSAPI 23:16:33 policy-pap | sasl.oauthbearer.clock.skew.seconds = 30 23:16:33 policy-pap | sasl.oauthbearer.expected.audience = null 23:16:33 policy-pap | sasl.oauthbearer.expected.issuer = null 23:16:33 policy-pap | sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000 23:16:33 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000 23:16:33 policy-pap | sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100 23:16:33 policy-pap | sasl.oauthbearer.jwks.endpoint.url = null 23:16:33 policy-pap | sasl.oauthbearer.scope.claim.name = scope 23:16:33 policy-pap | sasl.oauthbearer.sub.claim.name = sub 23:16:33 policy-pap | sasl.oauthbearer.token.endpoint.url = null 23:16:33 policy-pap | security.protocol = PLAINTEXT 23:16:33 policy-pap | security.providers = null 23:16:33 policy-pap | send.buffer.bytes = 131072 23:16:33 policy-pap | socket.connection.setup.timeout.max.ms = 30000 23:16:33 policy-pap | socket.connection.setup.timeout.ms = 10000 23:16:33 policy-pap | ssl.cipher.suites = null 23:16:33 policy-pap | ssl.enabled.protocols = [TLSv1.2, TLSv1.3] 23:16:33 policy-pap | ssl.endpoint.identification.algorithm = https 23:16:33 policy-pap | ssl.engine.factory.class = null 23:16:33 policy-pap | ssl.key.password = null 23:16:33 policy-pap | ssl.keymanager.algorithm = SunX509 23:16:33 policy-pap | ssl.keystore.certificate.chain = null 23:16:33 policy-pap | ssl.keystore.key = null 23:16:33 policy-pap | ssl.keystore.location = null 23:16:33 policy-pap | ssl.keystore.password = null 23:16:33 policy-pap | ssl.keystore.type = JKS 23:16:33 policy-pap | ssl.protocol = TLSv1.3 23:16:33 policy-pap | ssl.provider = null 23:16:33 policy-pap | 
ssl.secure.random.implementation = null 23:16:33 policy-pap | ssl.trustmanager.algorithm = PKIX 23:16:33 policy-pap | ssl.truststore.certificates = null 23:16:33 policy-pap | ssl.truststore.location = null 23:16:33 policy-pap | ssl.truststore.password = null 23:16:33 policy-pap | ssl.truststore.type = JKS 23:16:33 policy-pap | transaction.timeout.ms = 60000 23:16:33 policy-pap | transactional.id = null 23:16:33 policy-pap | value.serializer = class org.apache.kafka.common.serialization.StringSerializer 23:16:33 policy-pap | 23:16:33 policy-pap | [2024-03-01T23:14:35.543+00:00|INFO|KafkaProducer|main] [Producer clientId=producer-2] Instantiated an idempotent producer. 23:16:33 policy-pap | [2024-03-01T23:14:35.545+00:00|INFO|AppInfoParser|main] Kafka version: 3.6.1 23:16:33 policy-pap | [2024-03-01T23:14:35.545+00:00|INFO|AppInfoParser|main] Kafka commitId: 5e3c2b738d253ff5 23:16:33 policy-pap | [2024-03-01T23:14:35.546+00:00|INFO|AppInfoParser|main] Kafka startTimeMs: 1709334875545 23:16:33 policy-pap | [2024-03-01T23:14:35.546+00:00|INFO|InlineKafkaTopicSink|main] InlineKafkaTopicSink [getTopicCommInfrastructure()=KAFKA, toString()=InlineBusTopicSink [partitionId=faac204d-0282-4121-ab93-f8c193d51656, alive=false, publisher=KafkaPublisherWrapper []]]: KAFKA SINK created 23:16:33 policy-pap | [2024-03-01T23:14:35.546+00:00|INFO|ServiceManager|main] Policy PAP starting PAP Activator 23:16:33 policy-pap | [2024-03-01T23:14:35.546+00:00|INFO|ServiceManager|main] Policy PAP starting PDP publisher 23:16:33 policy-pap | [2024-03-01T23:14:35.548+00:00|INFO|ServiceManager|main] Policy PAP starting Policy Notification publisher 23:16:33 policy-pap | [2024-03-01T23:14:35.549+00:00|INFO|ServiceManager|main] Policy PAP starting PDP update timers 23:16:33 policy-pap | [2024-03-01T23:14:35.550+00:00|INFO|ServiceManager|main] Policy PAP starting PDP state-change timers 23:16:33 policy-pap | [2024-03-01T23:14:35.552+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification lock 23:16:33 policy-pap | [2024-03-01T23:14:35.552+00:00|INFO|ServiceManager|main] Policy PAP starting PDP modification requests 23:16:33 policy-pap | [2024-03-01T23:14:35.552+00:00|INFO|TimerManager|Thread-10] timer manager state-change started 23:16:33 policy-pap | [2024-03-01T23:14:35.550+00:00|INFO|TimerManager|Thread-9] timer manager update started 23:16:33 policy-pap | [2024-03-01T23:14:35.553+00:00|INFO|ServiceManager|main] Policy PAP starting PDP expiration timer 23:16:33 policy-pap | [2024-03-01T23:14:35.553+00:00|INFO|ServiceManager|main] Policy PAP started 23:16:33 policy-pap | [2024-03-01T23:14:35.555+00:00|INFO|PolicyPapApplication|main] Started PolicyPapApplication in 10.271 seconds (process running for 10.898) 23:16:33 policy-pap | [2024-03-01T23:14:36.021+00:00|INFO|Metadata|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] Cluster ID: M5MPGqipTkazoD2lpH5myg 23:16:33 policy-pap | [2024-03-01T23:14:36.029+00:00|INFO|Metadata|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] Cluster ID: M5MPGqipTkazoD2lpH5myg 23:16:33 policy-pap | [2024-03-01T23:14:36.041+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=UNKNOWN_TOPIC_OR_PARTITION} 23:16:33 policy-pap | [2024-03-01T23:14:36.041+00:00|INFO|Metadata|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Cluster ID: M5MPGqipTkazoD2lpH5myg 
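Once PAP is started, its consumers in groups policy-pap and b06317e2-... begin polling the policy-pdp-pap topic, and the client logs the cluster ID plus WARN lines such as UNKNOWN_TOPIC_OR_PARTITION and LEADER_NOT_AVAILABLE. These warnings are normally just the client re-fetching metadata while the broker auto-creates the topic and elects a partition leader, and they stop on their own once the leader is available. A minimal, illustrative consumer in the same group and topic might look like the sketch below (not the policy-pap source; the fetch timeout mirrors the fetchTimeout=15000 seen earlier in the log).

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

// Sketch of a consumer in the policy-pap group subscribed to policy-pdp-pap. Illustrative only.
public class PapLikeConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "policy-pap");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("policy-pdp-pap"));
            // Transient metadata WARNs (UNKNOWN_TOPIC_OR_PARTITION, LEADER_NOT_AVAILABLE)
            // may appear here while the topic is being auto-created.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(15));
            for (ConsumerRecord<String, String> r : records) {
                System.out.println(r.topic() + " " + r.value());
            }
        }
    }
}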
23:16:33 policy-pap | [2024-03-01T23:14:36.089+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b06317e2-ac80-4179-891e-43beb77f3709-3, groupId=b06317e2-ac80-4179-891e-43beb77f3709] Error while fetching metadata with correlation id 2 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:33 policy-pap | [2024-03-01T23:14:36.089+00:00|INFO|Metadata|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b06317e2-ac80-4179-891e-43beb77f3709-3, groupId=b06317e2-ac80-4179-891e-43beb77f3709] Cluster ID: M5MPGqipTkazoD2lpH5myg 23:16:33 policy-pap | [2024-03-01T23:14:36.146+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:33 policy-pap | [2024-03-01T23:14:36.167+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-2] [Producer clientId=producer-2] ProducerId set to 1 with epoch 0 23:16:33 policy-pap | [2024-03-01T23:14:36.168+00:00|INFO|TransactionManager|kafka-producer-network-thread | producer-1] [Producer clientId=producer-1] ProducerId set to 0 with epoch 0 23:16:33 policy-pap | [2024-03-01T23:14:36.204+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b06317e2-ac80-4179-891e-43beb77f3709-3, groupId=b06317e2-ac80-4179-891e-43beb77f3709] Error while fetching metadata with correlation id 4 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:33 policy-pap | [2024-03-01T23:14:36.271+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:33 policy-pap | [2024-03-01T23:14:36.318+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b06317e2-ac80-4179-891e-43beb77f3709-3, groupId=b06317e2-ac80-4179-891e-43beb77f3709] Error while fetching metadata with correlation id 6 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:33 policy-pap | [2024-03-01T23:14:36.380+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:33 policy-pap | [2024-03-01T23:14:36.434+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b06317e2-ac80-4179-891e-43beb77f3709-3, groupId=b06317e2-ac80-4179-891e-43beb77f3709] Error while fetching metadata with correlation id 8 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:33 policy-pap | [2024-03-01T23:14:36.512+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:33 policy-pap | [2024-03-01T23:14:36.544+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b06317e2-ac80-4179-891e-43beb77f3709-3, groupId=b06317e2-ac80-4179-891e-43beb77f3709] Error while fetching metadata with correlation id 10 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:33 policy-pap | [2024-03-01T23:14:36.619+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:33 policy-pap | [2024-03-01T23:14:36.650+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-b06317e2-ac80-4179-891e-43beb77f3709-3, groupId=b06317e2-ac80-4179-891e-43beb77f3709] Error while fetching metadata with correlation id 12 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:33 policy-pap | [2024-03-01T23:14:36.740+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:33 policy-pap | [2024-03-01T23:14:36.762+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b06317e2-ac80-4179-891e-43beb77f3709-3, groupId=b06317e2-ac80-4179-891e-43beb77f3709] Error while fetching metadata with correlation id 14 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:33 policy-pap | [2024-03-01T23:14:36.851+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:33 policy-pap | [2024-03-01T23:14:36.867+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b06317e2-ac80-4179-891e-43beb77f3709-3, groupId=b06317e2-ac80-4179-891e-43beb77f3709] Error while fetching metadata with correlation id 16 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:33 policy-pap | [2024-03-01T23:14:36.956+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:33 policy-pap | [2024-03-01T23:14:36.973+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b06317e2-ac80-4179-891e-43beb77f3709-3, groupId=b06317e2-ac80-4179-891e-43beb77f3709] Error while fetching metadata with correlation id 18 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.177501279Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.180040203Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=2.546995ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.188566495Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.189861793Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.296098ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.196188181Z level=info msg="Executing migration" id="copy correlation v1 to v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.196502953Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=314.692µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.201230942Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.202044437Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=811.295µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.20752828Z level=info msg="Executing migration" id="add provisioning column" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.217645922Z level=info msg="Migration successfully executed" id="add provisioning column" duration=10.116732ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.22385533Z 
level=info msg="Executing migration" id="create entity_events table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.224567294Z level=info msg="Migration successfully executed" id="create entity_events table" duration=710.924µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.228465368Z level=info msg="Executing migration" id="create dashboard public config v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.229230563Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=766.265µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.235671731Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.236141434Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.239793677Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.24023812Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.244017432Z level=info msg="Executing migration" id="Drop old dashboard public config table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.245381071Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=1.365509ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.2502783Z level=info msg="Executing migration" id="recreate dashboard public config v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.251700689Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=1.422079ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.256641009Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.26000496Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=3.363881ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.263874623Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.265071811Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.202098ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.271267878Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.272439005Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.171297ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.276036287Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.277912898Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.881741ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.281673962Z level=info msg="Executing migration" 
id="Drop public config table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.283041229Z level=info msg="Migration successfully executed" id="Drop public config table" duration=1.366507ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.28797702Z level=info msg="Executing migration" id="Recreate dashboard public config v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.288969325Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=991.785µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.292560338Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.294345608Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.78473ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.299977772Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.30119745Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.220328ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.304695791Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.305871798Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.180497ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.309623402Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.340180357Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=30.557455ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.346939958Z level=info msg="Executing migration" id="add annotations_enabled column" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.355992153Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=9.049475ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.359310943Z level=info msg="Executing migration" id="add time_selection_enabled column" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.367710394Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=8.398131ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.370994375Z level=info msg="Executing migration" id="delete orphaned public dashboards" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.371161026Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=166.661µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.374192764Z level=info msg="Executing migration" id="add share column" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.380462542Z level=info msg="Migration successfully executed" id="add share column" duration=6.269188ms 23:16:33 policy-pap | [2024-03-01T23:14:37.060+00:00|WARN|NetworkClient|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:33 policy-pap | 
[2024-03-01T23:14:37.077+00:00|WARN|NetworkClient|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b06317e2-ac80-4179-891e-43beb77f3709-3, groupId=b06317e2-ac80-4179-891e-43beb77f3709] Error while fetching metadata with correlation id 20 : {policy-pdp-pap=LEADER_NOT_AVAILABLE} 23:16:33 policy-pap | [2024-03-01T23:14:37.171+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:16:33 policy-pap | [2024-03-01T23:14:37.181+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 23:16:33 policy-pap | [2024-03-01T23:14:37.184+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b06317e2-ac80-4179-891e-43beb77f3709-3, groupId=b06317e2-ac80-4179-891e-43beb77f3709] Discovered group coordinator kafka:9092 (id: 2147483646 rack: null) 23:16:33 policy-pap | [2024-03-01T23:14:37.186+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b06317e2-ac80-4179-891e-43beb77f3709-3, groupId=b06317e2-ac80-4179-891e-43beb77f3709] (Re-)joining group 23:16:33 policy-pap | [2024-03-01T23:14:37.223+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b06317e2-ac80-4179-891e-43beb77f3709-3, groupId=b06317e2-ac80-4179-891e-43beb77f3709] Request joining group due to: need to re-join with the given member-id: consumer-b06317e2-ac80-4179-891e-43beb77f3709-3-b170df36-31f5-40b4-8d44-edc08e3f3a00 23:16:33 policy-pap | [2024-03-01T23:14:37.223+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b06317e2-ac80-4179-891e-43beb77f3709-3, groupId=b06317e2-ac80-4179-891e-43beb77f3709] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' (MemberIdRequiredException) 23:16:33 policy-pap | [2024-03-01T23:14:37.223+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b06317e2-ac80-4179-891e-43beb77f3709-3, groupId=b06317e2-ac80-4179-891e-43beb77f3709] (Re-)joining group 23:16:33 policy-pap | [2024-03-01T23:14:37.224+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: need to re-join with the given member-id: consumer-policy-pap-4-ebfd4786-b372-4cbe-8078-b32f1e613bd1 23:16:33 policy-pap | [2024-03-01T23:14:37.225+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Request joining group due to: rebalance failed due to 'The group member needs to have a valid member id before actually entering a consumer group.' 
(MemberIdRequiredException) 23:16:33 policy-pap | [2024-03-01T23:14:37.225+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] (Re-)joining group 23:16:33 policy-pap | [2024-03-01T23:14:40.268+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b06317e2-ac80-4179-891e-43beb77f3709-3, groupId=b06317e2-ac80-4179-891e-43beb77f3709] Successfully joined group with generation Generation{generationId=1, memberId='consumer-b06317e2-ac80-4179-891e-43beb77f3709-3-b170df36-31f5-40b4-8d44-edc08e3f3a00', protocol='range'} 23:16:33 policy-pap | [2024-03-01T23:14:40.273+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully joined group with generation Generation{generationId=1, memberId='consumer-policy-pap-4-ebfd4786-b372-4cbe-8078-b32f1e613bd1', protocol='range'} 23:16:33 policy-pap | [2024-03-01T23:14:40.278+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b06317e2-ac80-4179-891e-43beb77f3709-3, groupId=b06317e2-ac80-4179-891e-43beb77f3709] Finished assignment for group at generation 1: {consumer-b06317e2-ac80-4179-891e-43beb77f3709-3-b170df36-31f5-40b4-8d44-edc08e3f3a00=Assignment(partitions=[policy-pdp-pap-0])} 23:16:33 policy-pap | [2024-03-01T23:14:40.278+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Finished assignment for group at generation 1: {consumer-policy-pap-4-ebfd4786-b372-4cbe-8078-b32f1e613bd1=Assignment(partitions=[policy-pdp-pap-0])} 23:16:33 policy-pap | [2024-03-01T23:14:40.311+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Successfully synced group in generation Generation{generationId=1, memberId='consumer-policy-pap-4-ebfd4786-b372-4cbe-8078-b32f1e613bd1', protocol='range'} 23:16:33 policy-pap | [2024-03-01T23:14:40.312+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:33 policy-pap | [2024-03-01T23:14:40.317+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Adding newly assigned partitions: policy-pdp-pap-0 23:16:33 policy-pap | [2024-03-01T23:14:40.322+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b06317e2-ac80-4179-891e-43beb77f3709-3, groupId=b06317e2-ac80-4179-891e-43beb77f3709] Successfully synced group in generation Generation{generationId=1, memberId='consumer-b06317e2-ac80-4179-891e-43beb77f3709-3-b170df36-31f5-40b4-8d44-edc08e3f3a00', protocol='range'} 23:16:33 policy-pap | [2024-03-01T23:14:40.323+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b06317e2-ac80-4179-891e-43beb77f3709-3, groupId=b06317e2-ac80-4179-891e-43beb77f3709] Notifying assignor about the new Assignment(partitions=[policy-pdp-pap-0]) 23:16:33 policy-pap | [2024-03-01T23:14:40.323+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b06317e2-ac80-4179-891e-43beb77f3709-3, groupId=b06317e2-ac80-4179-891e-43beb77f3709] Adding newly assigned partitions: policy-pdp-pap-0 23:16:33 policy-pap | [2024-03-01T23:14:40.341+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-pdp-pap] [Consumer 
clientId=consumer-b06317e2-ac80-4179-891e-43beb77f3709-3, groupId=b06317e2-ac80-4179-891e-43beb77f3709] Found no committed offset for partition policy-pdp-pap-0 23:16:33 policy-pap | [2024-03-01T23:14:40.341+00:00|INFO|ConsumerCoordinator|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Found no committed offset for partition policy-pdp-pap-0 23:16:33 policy-pap | [2024-03-01T23:14:40.362+00:00|INFO|SubscriptionState|KAFKA-source-policy-heartbeat] [Consumer clientId=consumer-policy-pap-4, groupId=policy-pap] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 23:16:33 policy-pap | [2024-03-01T23:14:40.363+00:00|INFO|SubscriptionState|KAFKA-source-policy-pdp-pap] [Consumer clientId=consumer-b06317e2-ac80-4179-891e-43beb77f3709-3, groupId=b06317e2-ac80-4179-891e-43beb77f3709] Resetting offset for partition policy-pdp-pap-0 to position FetchPosition{offset=1, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[kafka:9092 (id: 1 rack: null)], epoch=0}}. 23:16:33 policy-pap | [2024-03-01T23:14:41.597+00:00|INFO|[/policy/pap/v1]|http-nio-6969-exec-4] Initializing Spring DispatcherServlet 'dispatcherServlet' 23:16:33 policy-pap | [2024-03-01T23:14:41.597+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Initializing Servlet 'dispatcherServlet' 23:16:33 policy-pap | [2024-03-01T23:14:41.601+00:00|INFO|DispatcherServlet|http-nio-6969-exec-4] Completed initialization in 2 ms 23:16:33 policy-pap | [2024-03-01T23:14:57.528+00:00|INFO|OrderedServiceImpl|KAFKA-source-policy-heartbeat] ***** OrderedServiceImpl implementers: 23:16:33 policy-pap | [] 23:16:33 policy-pap | [2024-03-01T23:14:57.529+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:33 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"8df0cb48-9297-480d-ad60-c4ceb603ea4a","timestampMs":1709334897483,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup"} 23:16:33 policy-pap | [2024-03-01T23:14:57.529+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:33 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"8df0cb48-9297-480d-ad60-c4ceb603ea4a","timestampMs":1709334897483,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup"} 23:16:33 policy-pap | [2024-03-01T23:14:57.537+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 23:16:33 policy-pap | [2024-03-01T23:14:57.631+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpUpdate starting 23:16:33 policy-pap | [2024-03-01T23:14:57.631+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpUpdate starting listener 23:16:33 policy-pap | [2024-03-01T23:14:57.631+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpUpdate starting timer 23:16:33 policy-pap | [2024-03-01T23:14:57.632+00:00|INFO|TimerManager|KAFKA-source-policy-heartbeat] update timer registered Timer [name=82cc1e45-516f-4977-9810-91f911494c61, expireMs=1709334927632] 23:16:33 policy-pap | 
[2024-03-01T23:14:57.634+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpUpdate starting enqueue 23:16:33 policy-pap | [2024-03-01T23:14:57.634+00:00|INFO|TimerManager|Thread-9] update timer waiting 29998ms Timer [name=82cc1e45-516f-4977-9810-91f911494c61, expireMs=1709334927632] 23:16:33 policy-pap | [2024-03-01T23:14:57.634+00:00|INFO|ServiceManager|KAFKA-source-policy-heartbeat] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpUpdate started 23:16:33 policy-pap | [2024-03-01T23:14:57.636+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:33 policy-pap | {"source":"pap-b0e01990-d32f-4dcf-a2d1-6c45f49736ab","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"82cc1e45-516f-4977-9810-91f911494c61","timestampMs":1709334897617,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:33 policy-pap | [2024-03-01T23:14:57.675+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:33 policy-pap | {"source":"pap-b0e01990-d32f-4dcf-a2d1-6c45f49736ab","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"82cc1e45-516f-4977-9810-91f911494c61","timestampMs":1709334897617,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:33 policy-pap | [2024-03-01T23:14:57.676+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 23:16:33 policy-pap | [2024-03-01T23:14:57.688+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:33 policy-pap | {"source":"pap-b0e01990-d32f-4dcf-a2d1-6c45f49736ab","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"messageName":"PDP_UPDATE","requestId":"82cc1e45-516f-4977-9810-91f911494c61","timestampMs":1709334897617,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:33 policy-pap | [2024-03-01T23:14:57.689+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 23:16:33 policy-pap | [2024-03-01T23:14:57.694+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:33 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"dca10b46-9c4d-48cd-9c8f-991d2f0a4c73","timestampMs":1709334897683,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup"} 23:16:33 policy-pap | [2024-03-01T23:14:57.694+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-pdp-pap] no listeners for autonomous message of type PdpStatus 23:16:33 policy-pap | [2024-03-01T23:14:57.694+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:33 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"82cc1e45-516f-4977-9810-91f911494c61","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"c311dd0a-1819-449e-9e7e-2b2e2fc4e4eb","timestampMs":1709334897684,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:33 policy-pap | [2024-03-01T23:14:57.695+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpUpdate stopping 23:16:33 policy-db-migrator | CREATE TABLE IF NOT 
EXISTS pdpsubgroup_pdp (pdpParentKeyName VARCHAR(120) NOT NULL, pdpParentKeyVersion VARCHAR(15) NOT NULL, pdpParentLocalName VARCHAR(120) NOT NULL, pdpLocalName VARCHAR(120) NOT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP_PDP (pdpParentKeyName, pdpParentKeyVersion, pdpParentLocalName, pdpLocalName, parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0500-pdpsubgroup.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS pdpsubgroup (CURRENTINSTANCECOUNT INT DEFAULT NULL, DESIREDINSTANCECOUNT INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_PDPSUBGROUP (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0510-toscacapabilityassignment.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignment (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENT(name, version)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0520-toscacapabilityassignments.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS (name, version)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0530-toscacapabilityassignments_toscacapabilityassignment.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilityassignments_toscacapabilityassignment (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYASSIGNMENTS_TOSCACAPABILITYASSIGNMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0540-toscacapabilitytype.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPE (name, version)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 
0550-toscacapabilitytypes.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES (name, version)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0560-toscacapabilitytypes_toscacapabilitytype.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscacapabilitytypes_toscacapabilitytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCACAPABILITYTYPES_TOSCACAPABILITYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0570-toscadatatype.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPE (name, version)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0580-toscadatatypes.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCADATATYPES (name, version)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0590-toscadatatypes_toscadatatype.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscadatatypes_toscadatatype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCADATATYPES_TOSCADATATYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0600-toscanodetemplate.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplate (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, capabilitiesName VARCHAR(120) NULL, capabilitiesVersion VARCHAR(20) NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETEMPLATE (name, version)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0610-toscanodetemplates.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 
CREATE TABLE IF NOT EXISTS toscanodetemplates (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETEMPLATES (name, version)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 kafka | [2024-03-01 23:14:07,383] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener) 23:16:33 kafka | [2024-03-01 23:14:07,394] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator) 23:16:33 kafka | [2024-03-01 23:14:07,405] INFO [TxnMarkerSenderThread-1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager) 23:16:33 kafka | [2024-03-01 23:14:07,405] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator) 23:16:33 kafka | [2024-03-01 23:14:07,417] INFO [MetadataCache brokerId=1] Updated cache from existing None to latest Features(version=3.6-IV2, finalizedFeatures={}, finalizedFeaturesEpoch=0). (kafka.server.metadata.ZkMetadataCache) 23:16:33 kafka | [2024-03-01 23:14:07,418] INFO [Controller id=1] Registering handlers (kafka.controller.KafkaController) 23:16:33 kafka | [2024-03-01 23:14:07,422] INFO [Controller id=1] Deleting log dir event notifications (kafka.controller.KafkaController) 23:16:33 kafka | [2024-03-01 23:14:07,426] INFO [Controller id=1] Deleting isr change notifications (kafka.controller.KafkaController) 23:16:33 kafka | [2024-03-01 23:14:07,430] INFO [Controller id=1] Initializing controller context (kafka.controller.KafkaController) 23:16:33 kafka | [2024-03-01 23:14:07,442] INFO [ExpirationReaper-1-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper) 23:16:33 kafka | [2024-03-01 23:14:07,453] INFO [Controller id=1] Initialized broker epochs cache: HashMap(1 -> 27) (kafka.controller.KafkaController) 23:16:33 kafka | [2024-03-01 23:14:07,460] DEBUG [Controller id=1] Register BrokerModifications handler for Set(1) (kafka.controller.KafkaController) 23:16:33 kafka | [2024-03-01 23:14:07,466] DEBUG [Channel manager on controller 1]: Controller 1 trying to connect to broker 1 (kafka.controller.ControllerChannelManager) 23:16:33 kafka | [2024-03-01 23:14:07,474] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread) 23:16:33 kafka | [2024-03-01 23:14:07,475] INFO [RequestSendThread controllerId=1] Starting (kafka.controller.RequestSendThread) 23:16:33 kafka | [2024-03-01 23:14:07,477] INFO [Controller id=1] Currently active brokers in the cluster: Set(1) (kafka.controller.KafkaController) 23:16:33 kafka | [2024-03-01 23:14:07,477] INFO [Controller id=1] Currently shutting brokers in the cluster: HashSet() (kafka.controller.KafkaController) 23:16:33 kafka | [2024-03-01 23:14:07,478] INFO [Controller id=1] Current list of topics in the cluster: HashSet() (kafka.controller.KafkaController) 23:16:33 kafka | [2024-03-01 23:14:07,478] INFO [Controller id=1] Fetching topic deletions in progress (kafka.controller.KafkaController) 23:16:33 kafka | [2024-03-01 23:14:07,481] INFO [Controller id=1] List of topics to be deleted: (kafka.controller.KafkaController) 23:16:33 kafka | [2024-03-01 23:14:07,481] INFO [Controller id=1] List of topics ineligible for deletion: (kafka.controller.KafkaController) 23:16:33 kafka | [2024-03-01 23:14:07,481] INFO [Controller id=1] Initializing topic deletion manager (kafka.controller.KafkaController) 23:16:33 kafka | [2024-03-01 23:14:07,482] INFO [Topic 
Deletion Manager 1] Initializing manager with initial deletions: Set(), initial ineligible deletions: HashSet() (kafka.controller.TopicDeletionManager) 23:16:33 kafka | [2024-03-01 23:14:07,483] INFO [Controller id=1] Sending update metadata request (kafka.controller.KafkaController) 23:16:33 kafka | [2024-03-01 23:14:07,484] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1] Enabling request processing. (kafka.network.SocketServer) 23:16:33 kafka | [2024-03-01 23:14:07,486] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 0 partitions (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:07,488] INFO Awaiting socket connections on 0.0.0.0:29092. (kafka.network.DataPlaneAcceptor) 23:16:33 kafka | [2024-03-01 23:14:07,493] INFO [ReplicaStateMachine controllerId=1] Initializing replica state (kafka.controller.ZkReplicaStateMachine) 23:16:33 kafka | [2024-03-01 23:14:07,493] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.DataPlaneAcceptor) 23:16:33 kafka | [2024-03-01 23:14:07,494] INFO [ReplicaStateMachine controllerId=1] Triggering online replica state changes (kafka.controller.ZkReplicaStateMachine) 23:16:33 kafka | [2024-03-01 23:14:07,498] INFO [ReplicaStateMachine controllerId=1] Triggering offline replica state changes (kafka.controller.ZkReplicaStateMachine) 23:16:33 kafka | [2024-03-01 23:14:07,500] DEBUG [ReplicaStateMachine controllerId=1] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine) 23:16:33 kafka | [2024-03-01 23:14:07,500] INFO [PartitionStateMachine controllerId=1] Initializing partition state (kafka.controller.ZkPartitionStateMachine) 23:16:33 kafka | [2024-03-01 23:14:07,502] INFO [PartitionStateMachine controllerId=1] Triggering online partition state changes (kafka.controller.ZkPartitionStateMachine) 23:16:33 kafka | [2024-03-01 23:14:07,504] INFO Kafka version: 7.6.0-ccs (org.apache.kafka.common.utils.AppInfoParser) 23:16:33 kafka | [2024-03-01 23:14:07,504] INFO Kafka commitId: 1991cb733c81d6791626f88253a042b2ec835ab8 (org.apache.kafka.common.utils.AppInfoParser) 23:16:33 kafka | [2024-03-01 23:14:07,504] INFO Kafka startTimeMs: 1709334847498 (org.apache.kafka.common.utils.AppInfoParser) 23:16:33 kafka | [2024-03-01 23:14:07,506] INFO [KafkaServer id=1] started (kafka.server.KafkaServer) 23:16:33 kafka | [2024-03-01 23:14:07,506] DEBUG [PartitionStateMachine controllerId=1] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine) 23:16:33 kafka | [2024-03-01 23:14:07,507] INFO [Controller id=1] Ready to serve as the new controller with epoch 1 (kafka.controller.KafkaController) 23:16:33 kafka | [2024-03-01 23:14:07,507] INFO [RequestSendThread controllerId=1] Controller 1 connected to kafka:9092 (id: 1 rack: null) for sending state change requests (kafka.controller.RequestSendThread) 23:16:33 kafka | [2024-03-01 23:14:07,514] INFO [Controller id=1] Partitions undergoing preferred replica election: (kafka.controller.KafkaController) 23:16:33 kafka | [2024-03-01 23:14:07,514] INFO [Controller id=1] Partitions that completed preferred replica election: (kafka.controller.KafkaController) 23:16:33 kafka | [2024-03-01 23:14:07,515] INFO [Controller id=1] Skipping preferred replica election for partitions due to topic deletion: (kafka.controller.KafkaController) 23:16:33 kafka | [2024-03-01 23:14:07,515] INFO [Controller id=1] Resuming preferred replica election for partitions: (kafka.controller.KafkaController) 
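[Editor's note: the Kafka controller/broker startup entries above, together with the repeated LEADER_NOT_AVAILABLE warnings from the policy-pap consumers earlier in this log, are expected during bring-up: the policy-pdp-pap topic does not exist until PAP first publishes to it (see the "Creating topic policy-pdp-pap" entry further down), so metadata fetches fail until the controller elects a partition leader. The snippet below is an illustrative sketch only, not part of the CSIT job or the images under test; it assumes the kafka-python client is installed and that the broker logged above is reachable as kafka:9092.]

    # Hypothetical reviewer helper (not from this build): poll broker metadata until the
    # policy-pdp-pap topic has partitions, i.e. the point at which the consumers above
    # stop logging LEADER_NOT_AVAILABLE and can join their groups.
    import time
    from kafka import KafkaConsumer

    consumer = KafkaConsumer(bootstrap_servers="kafka:9092", client_id="log-review-check")
    deadline = time.time() + 60
    while time.time() < deadline:
        partitions = consumer.partitions_for_topic("policy-pdp-pap")
        if partitions:  # metadata now lists partition 0, so leader election has completed
            print("policy-pdp-pap partitions:", sorted(partitions))
            break
        time.sleep(2)  # mirrors the periodic metadata retries visible in the consumer logs
    consumer.close()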
23:16:33 kafka | [2024-03-01 23:14:07,517] INFO [Controller id=1] Starting replica leader election (PREFERRED) for partitions triggered by ZkTriggered (kafka.controller.KafkaController) 23:16:33 kafka | [2024-03-01 23:14:07,539] INFO [Controller id=1] Starting the controller scheduler (kafka.controller.KafkaController) 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0620-toscanodetemplates_toscanodetemplate.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetemplates_toscanodetemplate (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETEMPLATES_TOSCANODETEMPLATE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0630-toscanodetype.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, requirementsVersion VARCHAR(20) NULL, requirementsName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCANODETYPE (name, version)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0640-toscanodetypes.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCANODETYPES (name, version)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0650-toscanodetypes_toscanodetype.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscanodetypes_toscanodetype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCANODETYPES_TOSCANODETYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0660-toscaparameter.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.385574103Z level=info msg="Executing migration" id="backfill empty share column fields with default of public" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.385775104Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=199.861µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.388900243Z level=info msg="Executing migration" id="create file table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.389709398Z level=info msg="Migration successfully executed" id="create file table" duration=808.965µs 23:16:33 grafana | logger=migrator 
t=2024-03-01T23:14:01.392838287Z level=info msg="Executing migration" id="file table idx: path natural pk" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.393936124Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=1.094407ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.401233598Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.402276845Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=1.042777ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.405572375Z level=info msg="Executing migration" id="create file_meta table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.406701751Z level=info msg="Migration successfully executed" id="create file_meta table" duration=1.134286ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.410022732Z level=info msg="Executing migration" id="file table idx: path key" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.411638311Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=1.614559ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.41970034Z level=info msg="Executing migration" id="set path collation in file table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.419771031Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=71.071µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.423402503Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.423500923Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=98.67µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.427517788Z level=info msg="Executing migration" id="managed permissions migration" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.428325713Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=807.745µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.432321747Z level=info msg="Executing migration" id="managed folder permissions alert actions migration" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.432536879Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=215.212µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.43781489Z level=info msg="Executing migration" id="RBAC action name migrator" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.438882297Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=1.066127ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.442230027Z level=info msg="Executing migration" id="Add UID column to playlist" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.452691761Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=10.460084ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.455991381Z level=info msg="Executing migration" id="Update uid column values in playlist" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.456152692Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=159.341µs 23:16:33 grafana | logger=migrator 
t=2024-03-01T23:14:01.45915334Z level=info msg="Executing migration" id="Add index for uid in playlist" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.460220076Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=1.066196ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.464726184Z level=info msg="Executing migration" id="update group index for alert rules" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.466065533Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=1.314539ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.470057887Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.470310108Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=260.931µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.475140528Z level=info msg="Executing migration" id="admin only folder/dashboard permission" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.476041593Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=895.545µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.480964393Z level=info msg="Executing migration" id="add action column to seed_assignment" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.490551931Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=9.587328ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.495721032Z level=info msg="Executing migration" id="add scope column to seed_assignment" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.503279099Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=7.555377ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.509616467Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.510830114Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=1.213477ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.517674566Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.623889194Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=106.212378ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.635226433Z level=info msg="Executing migration" id="add unique index builtin_role_name back" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.63645657Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=1.229487ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.641668042Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope" 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaparameter (VALUE VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPARAMETER (parentLocalName, localName, parentKeyVersion, 
parentKeyName)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0670-toscapolicies.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICIES (name, version)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0680-toscapolicies_toscapolicy.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicies_toscapolicy (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICIES_TOSCAPOLICY (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0690-toscapolicy.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicy (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAPOLICY (name, version)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0700-toscapolicytype.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPE (name, version)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0710-toscapolicytypes.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES (name, version)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0720-toscapolicytypes_toscapolicytype.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscapolicytypes_toscapolicytype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAPOLICYTYPES_TOSCAPOLICYTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0730-toscaproperty.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaproperty 
(DEFAULTVALUE VARCHAR(255) DEFAULT NULL, `DESCRIPTION` VARCHAR(255) DEFAULT NULL, ENTRYSCHEMA LONGBLOB DEFAULT NULL, REQUIRED BOOLEAN DEFAULT 0, STATUS INT DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, name VARCHAR(120) DEFAULT NULL, version VARCHAR(20) DEFAULT NULL, PRIMARY KEY PK_TOSCAPROPERTY (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0740-toscarelationshiptype.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptype (`DESCRIPTION` VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPE (name, version)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0750-toscarelationshiptypes.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES (name, version)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0760-toscarelationshiptypes_toscarelationshiptype.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarelationshiptypes_toscarelationshiptype (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCARELATIONSHIPTYPES_TOSCARELATIONSHIPTYPE (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0770-toscarequirement.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirement (CAPABILITY VARCHAR(255) NULL, `DESCRIPTION` VARCHAR(255) NULL, NODE VARCHAR(255) NULL, RELATIONSHIP VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, type_name VARCHAR(255) NULL, type_version VARCHAR(255) NULL, PRIMARY KEY PK_TOSCAREQUIREMENT (name, version)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0780-toscarequirements.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements (name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS (name, version)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.64294973Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=1.281198ms 23:16:33 grafana | logger=migrator 
t=2024-03-01T23:14:01.646919864Z level=info msg="Executing migration" id="add primary key to seed_assigment" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.680559749Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=33.638985ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.685970412Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.686278694Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=307.692µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.691431266Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration" 23:16:33 kafka | [2024-03-01 23:14:07,580] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 0 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:07,588] INFO [zk-broker-1-to-controller-alter-partition-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 23:16:33 kafka | [2024-03-01 23:14:07,635] INFO [zk-broker-1-to-controller-forwarding-channel-manager]: Recorded new controller, from now on will use node kafka:9092 (id: 1 rack: null) (kafka.server.BrokerToControllerRequestThread) 23:16:33 kafka | [2024-03-01 23:14:12,541] INFO [Controller id=1] Processing automatic preferred replica leader election (kafka.controller.KafkaController) 23:16:33 kafka | [2024-03-01 23:14:12,541] TRACE [Controller id=1] Checking need to trigger auto leader balancing (kafka.controller.KafkaController) 23:16:33 kafka | [2024-03-01 23:14:36,040] INFO Creating topic policy-pdp-pap with configuration {} and initial partition assignment HashMap(0 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 23:16:33 kafka | [2024-03-01 23:14:36,042] INFO Creating topic __consumer_offsets with configuration {compression.type=producer, cleanup.policy=compact, segment.bytes=104857600} and initial partition assignment HashMap(0 -> ArrayBuffer(1), 1 -> ArrayBuffer(1), 2 -> ArrayBuffer(1), 3 -> ArrayBuffer(1), 4 -> ArrayBuffer(1), 5 -> ArrayBuffer(1), 6 -> ArrayBuffer(1), 7 -> ArrayBuffer(1), 8 -> ArrayBuffer(1), 9 -> ArrayBuffer(1), 10 -> ArrayBuffer(1), 11 -> ArrayBuffer(1), 12 -> ArrayBuffer(1), 13 -> ArrayBuffer(1), 14 -> ArrayBuffer(1), 15 -> ArrayBuffer(1), 16 -> ArrayBuffer(1), 17 -> ArrayBuffer(1), 18 -> ArrayBuffer(1), 19 -> ArrayBuffer(1), 20 -> ArrayBuffer(1), 21 -> ArrayBuffer(1), 22 -> ArrayBuffer(1), 23 -> ArrayBuffer(1), 24 -> ArrayBuffer(1), 25 -> ArrayBuffer(1), 26 -> ArrayBuffer(1), 27 -> ArrayBuffer(1), 28 -> ArrayBuffer(1), 29 -> ArrayBuffer(1), 30 -> ArrayBuffer(1), 31 -> ArrayBuffer(1), 32 -> ArrayBuffer(1), 33 -> ArrayBuffer(1), 34 -> ArrayBuffer(1), 35 -> ArrayBuffer(1), 36 -> ArrayBuffer(1), 37 -> ArrayBuffer(1), 38 -> ArrayBuffer(1), 39 -> ArrayBuffer(1), 40 -> ArrayBuffer(1), 41 -> ArrayBuffer(1), 42 -> ArrayBuffer(1), 43 -> ArrayBuffer(1), 44 -> ArrayBuffer(1), 45 -> ArrayBuffer(1), 46 -> ArrayBuffer(1), 47 -> ArrayBuffer(1), 48 -> ArrayBuffer(1), 49 -> ArrayBuffer(1)) (kafka.zk.AdminZkClient) 23:16:33 kafka | [2024-03-01 23:14:36,063] DEBUG [Controller id=1] There is no producerId block yet (Zk path version 0), creating the first block 
(kafka.controller.KafkaController) 23:16:33 kafka | [2024-03-01 23:14:36,077] INFO [Controller id=1] Acquired new producerId block ProducerIdsBlock(assignedBrokerId=1, firstProducerId=0, size=1000) by writing to Zk with path version 1 (kafka.controller.KafkaController) 23:16:33 policy-db-migrator | > upgrade 0790-toscarequirements_toscarequirement.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscarequirements_toscarequirement (conceptContainerMapName VARCHAR(120) NOT NULL, concpetContainerMapVersion VARCHAR(20) NOT NULL, conceptContainerName VARCHAR(120) NOT NULL, conceptContainerVersion VARCHAR(20) NOT NULL, name VARCHAR(120) NULL, version VARCHAR(20) NULL, PRIMARY KEY PK_TOSCAREQUIREMENTS_TOSCAREQUIREMENT (conceptContainerMapName, concpetContainerMapVersion, conceptContainerName, conceptContainerVersion)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0800-toscaservicetemplate.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscaservicetemplate (`DESCRIPTION` VARCHAR(255) NULL, TOSCADEFINITIONSVERSION VARCHAR(255) NULL, derived_from_name VARCHAR(255) NULL, derived_from_version VARCHAR(255) NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, capabilityTypesVersion VARCHAR(20) NULL, capabilityTypesName VARCHAR(120) NULL, dataTypesName VARCHAR(120) NULL, dataTypesVersion VARCHAR(20) NULL, nodeTypesVersion VARCHAR(20) NULL, nodeTypesName VARCHAR(120) NULL, policyTypesName VARCHAR(120) NULL, policyTypesVersion VARCHAR(20) NULL, relationshipTypesVersion VARCHAR(20) NULL, relationshipTypesName VARCHAR(120) NULL, topologyTemplateLocalName VARCHAR(120) NULL, topologyTemplateParentKeyName VARCHAR(120) NULL, topologyTemplateParentKeyVersion VARCHAR(15) NULL, topologyTemplateParentLocalName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCASERVICETEMPLATE (name, version)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0810-toscatopologytemplate.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatopologytemplate (`description` VARCHAR(255) NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, nodeTemplatessVersion VARCHAR(20) NULL, nodeTemplatesName VARCHAR(120) NULL, policyVersion VARCHAR(20) NULL, policyName VARCHAR(120) NULL, PRIMARY KEY PK_TOSCATOPOLOGYTEMPLATE (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 kafka | [2024-03-01 23:14:36,102] INFO [Controller id=1] New topics: [Set(policy-pdp-pap, __consumer_offsets)], deleted topics: [HashSet()], new partition replica assignment [Set(TopicIdReplicaAssignment(policy-pdp-pap,Some(Ov6CtVzQRUKLA8r33TyiiA),Map(policy-pdp-pap-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))), TopicIdReplicaAssignment(__consumer_offsets,Some(qm-KDGJASSik3LmR-JsdxQ),HashMap(__consumer_offsets-22 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-30 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-25 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), 
__consumer_offsets-35 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-37 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-38 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-13 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-8 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-21 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-4 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-27 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-7 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-9 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-46 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-41 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-33 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-23 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-49 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-47 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-16 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-28 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-31 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-36 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-42 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-3 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-18 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-15 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-24 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-17 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-48 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-19 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-11 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-2 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-43 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-6 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-14 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-20 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-0 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-44 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-39 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-12 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-45 -> ReplicaAssignment(replicas=1, 
addingReplicas=, removingReplicas=), __consumer_offsets-1 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-5 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-26 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-29 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-34 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-10 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-32 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=), __consumer_offsets-40 -> ReplicaAssignment(replicas=1, addingReplicas=, removingReplicas=))))] (kafka.controller.KafkaController) 23:16:33 kafka | [2024-03-01 23:14:36,104] INFO [Controller id=1] New partition creation callback for __consumer_offsets-22,__consumer_offsets-30,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-38,__consumer_offsets-13,__consumer_offsets-8,__consumer_offsets-21,__consumer_offsets-4,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,__consumer_offsets-46,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,__consumer_offsets-47,__consumer_offsets-16,__consumer_offsets-28,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,__consumer_offsets-18,__consumer_offsets-37,policy-pdp-pap-0,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-17,__consumer_offsets-48,__consumer_offsets-19,__consumer_offsets-11,__consumer_offsets-2,__consumer_offsets-43,__consumer_offsets-6,__consumer_offsets-14,__consumer_offsets-20,__consumer_offsets-0,__consumer_offsets-44,__consumer_offsets-39,__consumer_offsets-12,__consumer_offsets-45,__consumer_offsets-1,__consumer_offsets-5,__consumer_offsets-26,__consumer_offsets-29,__consumer_offsets-34,__consumer_offsets-10,__consumer_offsets-32,__consumer_offsets-40 (kafka.controller.KafkaController) 23:16:33 kafka | [2024-03-01 23:14:36,106] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,107] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,107] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,107] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,107] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,107] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,107] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 
23:14:36,108] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,108] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,108] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,108] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 policy-pap | [2024-03-01T23:14:57.695+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpUpdate stopping enqueue 23:16:33 policy-pap | [2024-03-01T23:14:57.695+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpUpdate stopping timer 23:16:33 policy-pap | [2024-03-01T23:14:57.697+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=82cc1e45-516f-4977-9810-91f911494c61, expireMs=1709334927632] 23:16:33 policy-pap | [2024-03-01T23:14:57.697+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpUpdate stopping listener 23:16:33 policy-pap | [2024-03-01T23:14:57.697+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpUpdate stopped 23:16:33 policy-pap | [2024-03-01T23:14:57.698+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:33 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp Heartbeat","messageName":"PDP_STATUS","requestId":"dca10b46-9c4d-48cd-9c8f-991d2f0a4c73","timestampMs":1709334897683,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup"} 23:16:33 policy-pap | [2024-03-01T23:14:57.706+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpUpdate successful 23:16:33 policy-pap | [2024-03-01T23:14:57.706+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f start publishing next request 23:16:33 policy-pap | [2024-03-01T23:14:57.706+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpStateChange starting 23:16:33 policy-pap | [2024-03-01T23:14:57.706+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpStateChange starting listener 23:16:33 policy-pap | [2024-03-01T23:14:57.706+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpStateChange starting timer 23:16:33 policy-pap | [2024-03-01T23:14:57.706+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer registered Timer [name=1f5da1e6-b10a-4031-a568-7543e4e07892, expireMs=1709334927706] 23:16:33 policy-pap | [2024-03-01T23:14:57.706+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpStateChange starting enqueue 23:16:33 policy-pap | [2024-03-01T23:14:57.706+00:00|INFO|TimerManager|Thread-10] state-change timer waiting 30000ms Timer [name=1f5da1e6-b10a-4031-a568-7543e4e07892, expireMs=1709334927706] 23:16:33 policy-pap | 
[2024-03-01T23:14:57.706+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpStateChange started 23:16:33 policy-pap | [2024-03-01T23:14:57.707+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:33 policy-pap | {"source":"pap-b0e01990-d32f-4dcf-a2d1-6c45f49736ab","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"1f5da1e6-b10a-4031-a568-7543e4e07892","timestampMs":1709334897618,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:33 policy-pap | [2024-03-01T23:14:57.747+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:33 policy-pap | {"source":"pap-b0e01990-d32f-4dcf-a2d1-6c45f49736ab","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"1f5da1e6-b10a-4031-a568-7543e4e07892","timestampMs":1709334897618,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:33 policy-pap | [2024-03-01T23:14:57.747+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_STATE_CHANGE 23:16:33 policy-pap | [2024-03-01T23:14:57.750+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:33 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"1f5da1e6-b10a-4031-a568-7543e4e07892","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"896ca6e5-fcb7-4e4a-b493-bdc4aa434dbf","timestampMs":1709334897720,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:33 policy-pap | [2024-03-01T23:14:57.762+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpStateChange stopping 23:16:33 policy-pap | [2024-03-01T23:14:57.762+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:33 policy-pap | {"pdpType":"apex","state":"PASSIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"82cc1e45-516f-4977-9810-91f911494c61","responseStatus":"SUCCESS","responseMessage":"Pdp update successful."},"messageName":"PDP_STATUS","requestId":"c311dd0a-1819-449e-9e7e-2b2e2fc4e4eb","timestampMs":1709334897684,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:33 policy-pap | [2024-03-01T23:14:57.762+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpStateChange stopping enqueue 23:16:33 policy-pap | [2024-03-01T23:14:57.762+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpStateChange stopping timer 23:16:33 policy-pap | [2024-03-01T23:14:57.762+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] state-change timer cancelled Timer [name=1f5da1e6-b10a-4031-a568-7543e4e07892, expireMs=1709334927706] 23:16:33 policy-pap | [2024-03-01T23:14:57.762+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpStateChange stopping listener 23:16:33 policy-pap | [2024-03-01T23:14:57.762+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpStateChange stopped 23:16:33 policy-pap | [2024-03-01T23:14:57.762+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] 
apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpStateChange successful 23:16:33 policy-pap | [2024-03-01T23:14:57.762+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f start publishing next request 23:16:33 policy-pap | [2024-03-01T23:14:57.762+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpUpdate starting 23:16:33 policy-pap | [2024-03-01T23:14:57.762+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpUpdate starting listener 23:16:33 policy-pap | [2024-03-01T23:14:57.762+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpUpdate starting timer 23:16:33 policy-pap | [2024-03-01T23:14:57.762+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer registered Timer [name=3aba36f4-3a85-44dd-b131-b77a9b4bfbb5, expireMs=1709334927762] 23:16:33 policy-pap | [2024-03-01T23:14:57.762+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpUpdate starting enqueue 23:16:33 policy-pap | [2024-03-01T23:14:57.762+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpUpdate started 23:16:33 policy-pap | [2024-03-01T23:14:57.763+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 82cc1e45-516f-4977-9810-91f911494c61 23:16:33 policy-pap | [2024-03-01T23:14:57.763+00:00|INFO|network|Thread-7] [OUT|KAFKA|policy-pdp-pap] 23:16:33 policy-pap | {"source":"pap-b0e01990-d32f-4dcf-a2d1-6c45f49736ab","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"3aba36f4-3a85-44dd-b131-b77a9b4bfbb5","timestampMs":1709334897738,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:33 policy-pap | [2024-03-01T23:14:57.766+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:33 policy-db-migrator | > upgrade 0820-toscatrigger.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS toscatrigger (ACTION VARCHAR(255) NULL, toscaCondition LONGBLOB DEFAULT NULL, toscaConstraint LONGBLOB DEFAULT NULL, `DESCRIPTION` VARCHAR(255) NULL, EVALUATIONS INT DEFAULT NULL, EVENTTYPE VARCHAR(255) NULL, METHOD VARCHAR(255) NULL, `PERIOD` LONGBLOB DEFAULT NULL, SCHEDULE LONGBLOB DEFAULT NULL, TARGETFILTER LONGBLOB DEFAULT NULL, parentLocalName VARCHAR(120) NOT NULL, localName VARCHAR(120) NOT NULL, parentKeyVersion VARCHAR(15) NOT NULL, parentKeyName VARCHAR(120) NOT NULL, PRIMARY KEY PK_TOSCATRIGGER (parentLocalName, localName, parentKeyVersion, parentKeyName)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0830-FK_ToscaNodeTemplate_capabilitiesName.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_capabilitiesName ON toscanodetemplate(capabilitiesName, capabilitiesVersion) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0840-FK_ToscaNodeTemplate_requirementsName.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE INDEX FK_ToscaNodeTemplate_requirementsName ON toscanodetemplate(requirementsName, requirementsVersion) 23:16:33 
policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0850-FK_ToscaNodeType_requirementsName.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE INDEX FK_ToscaNodeType_requirementsName ON toscanodetype(requirementsName, requirementsVersion) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_capabilityTypesName ON toscaservicetemplate(capabilityTypesName, capabilityTypesVersion) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0870-FK_ToscaServiceTemplate_dataTypesName.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_dataTypesName ON toscaservicetemplate(dataTypesName, dataTypesVersion) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0880-FK_ToscaServiceTemplate_nodeTypesName.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_nodeTypesName ON toscaservicetemplate(nodeTypesName, nodeTypesVersion) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0890-FK_ToscaServiceTemplate_policyTypesName.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_policyTypesName ON toscaservicetemplate(policyTypesName, policyTypesVersion) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE INDEX FK_ToscaServiceTemplate_relationshipTypesName ON toscaservicetemplate(relationshipTypesName, relationshipTypesVersion) 23:16:33 kafka | [2024-03-01 23:14:36,108] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,108] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,109] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,109] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,109] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,109] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 
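The "Creating topic policy-pdp-pap" entry earlier in this stretch comes from the broker's AdminZkClient, which lays the topic out with a single partition and a single replica on broker 1. A rough client-side equivalent is sketched below; kafka-python and the kafka:9092 bootstrap address are assumptions about how one would reach the CSIT broker, not part of the test run itself.

from kafka.admin import KafkaAdminClient, NewTopic

# Rough client-side equivalent of the topic layout logged above:
# policy-pdp-pap with one partition and one replica.
admin = KafkaAdminClient(bootstrap_servers="kafka:9092")
admin.create_topics([
    NewTopic(name="policy-pdp-pap", num_partitions=1, replication_factor=1),
])
admin.close()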
23:16:33 kafka | [2024-03-01 23:14:36,109] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,109] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,109] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,110] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,110] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,110] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,110] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,110] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,110] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,111] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,111] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,111] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,111] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,111] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,111] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,112] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,112] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 
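The fifty __consumer_offsets partitions whose state transitions fill these entries are where consumer groups commit their offsets; Kafka selects the partition for a group as abs(groupId.hashCode) % 50, using Java's String.hashCode. A small illustrative re-implementation follows (the group id shown is a placeholder, not taken from this run):

def java_string_hashcode(s: str) -> int:
    # Java String.hashCode: h = 31*h + c, with signed 32-bit overflow.
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return h - 0x100000000 if h >= 0x80000000 else h

def offsets_partition(group_id: str, num_partitions: int = 50) -> int:
    # Which __consumer_offsets partition holds this group's committed offsets.
    return abs(java_string_hashcode(group_id)) % num_partitions

print(offsets_partition("policy-pap"))  # placeholder group id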
kafka | [2024-03-01 23:14:36,112] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,112] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-6 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,112] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,112] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,112] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,113] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-29 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,114] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 state from NonExistentPartition to NewPartition with assigned replicas 1 (state.change.logger) 23:16:33 
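The policy-pap entries that follow carry the PDP_UPDATE / PDP_STATE_CHANGE / PDP_STATUS exchange as JSON messages on the policy-pdp-pap topic. A minimal way to watch that traffic while debugging, assuming kafka-python is installed and the broker is reachable as kafka:9092 (this is a sketch for inspection only, not part of the CSIT suite):

import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "policy-pdp-pap",
    bootstrap_servers="kafka:9092",
    auto_offset_reset="earliest",  # replay what is already on the topic
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for record in consumer:
    msg = record.value
    # Each message names its type, request id and the PDP group/subgroup it addresses.
    print(msg.get("messageName"), msg.get("requestId"),
          msg.get("pdpGroup"), msg.get("pdpSubgroup"))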
policy-pap | {"source":"pap-b0e01990-d32f-4dcf-a2d1-6c45f49736ab","state":"ACTIVE","messageName":"PDP_STATE_CHANGE","requestId":"1f5da1e6-b10a-4031-a568-7543e4e07892","timestampMs":1709334897618,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:33 policy-pap | [2024-03-01T23:14:57.767+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_STATE_CHANGE 23:16:33 policy-pap | [2024-03-01T23:14:57.769+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:33 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpStateChange","policies":[],"response":{"responseTo":"1f5da1e6-b10a-4031-a568-7543e4e07892","responseStatus":"SUCCESS","responseMessage":"State changed to active. No policies found."},"messageName":"PDP_STATUS","requestId":"896ca6e5-fcb7-4e4a-b493-bdc4aa434dbf","timestampMs":1709334897720,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:33 policy-pap | [2024-03-01T23:14:57.769+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 1f5da1e6-b10a-4031-a568-7543e4e07892 23:16:33 policy-pap | [2024-03-01T23:14:57.772+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:33 policy-pap | {"source":"pap-b0e01990-d32f-4dcf-a2d1-6c45f49736ab","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"3aba36f4-3a85-44dd-b131-b77a9b4bfbb5","timestampMs":1709334897738,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:33 policy-pap | [2024-03-01T23:14:57.772+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-heartbeat] discarding event of type PDP_UPDATE 23:16:33 policy-pap | [2024-03-01T23:14:57.775+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:33 policy-pap | {"source":"pap-b0e01990-d32f-4dcf-a2d1-6c45f49736ab","pdpHeartbeatIntervalMs":120000,"policiesToBeDeployed":[],"policiesToBeUndeployed":[],"messageName":"PDP_UPDATE","requestId":"3aba36f4-3a85-44dd-b131-b77a9b4bfbb5","timestampMs":1709334897738,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:33 policy-pap | [2024-03-01T23:14:57.775+00:00|INFO|MessageTypeDispatcher|KAFKA-source-policy-pdp-pap] discarding event of type PDP_UPDATE 23:16:33 policy-pap | [2024-03-01T23:14:57.780+00:00|INFO|network|KAFKA-source-policy-heartbeat] [IN|KAFKA|policy-heartbeat] 23:16:33 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message for PdpUpdate","policies":[],"response":{"responseTo":"3aba36f4-3a85-44dd-b131-b77a9b4bfbb5","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"bd50a662-68c8-4d5c-b69b-8f62e11e8a9f","timestampMs":1709334897774,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:33 policy-pap | [2024-03-01T23:14:57.780+00:00|INFO|RequestIdDispatcher|KAFKA-source-policy-heartbeat] no listener for request id 3aba36f4-3a85-44dd-b131-b77a9b4bfbb5 23:16:33 policy-pap | [2024-03-01T23:14:57.781+00:00|INFO|network|KAFKA-source-policy-pdp-pap] [IN|KAFKA|policy-pdp-pap] 23:16:33 policy-pap | {"pdpType":"apex","state":"ACTIVE","healthy":"HEALTHY","description":"Pdp status response message 
for PdpUpdate","policies":[],"response":{"responseTo":"3aba36f4-3a85-44dd-b131-b77a9b4bfbb5","responseStatus":"SUCCESS","responseMessage":"Pdp already updated"},"messageName":"PDP_STATUS","requestId":"bd50a662-68c8-4d5c-b69b-8f62e11e8a9f","timestampMs":1709334897774,"name":"apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f","pdpGroup":"defaultGroup","pdpSubgroup":"apex"} 23:16:33 policy-pap | [2024-03-01T23:14:57.781+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpUpdate stopping 23:16:33 policy-pap | [2024-03-01T23:14:57.782+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpUpdate stopping enqueue 23:16:33 policy-pap | [2024-03-01T23:14:57.782+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpUpdate stopping timer 23:16:33 policy-pap | [2024-03-01T23:14:57.782+00:00|INFO|TimerManager|KAFKA-source-policy-pdp-pap] update timer cancelled Timer [name=3aba36f4-3a85-44dd-b131-b77a9b4bfbb5, expireMs=1709334927762] 23:16:33 policy-pap | [2024-03-01T23:14:57.782+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpUpdate stopping listener 23:16:33 policy-pap | [2024-03-01T23:14:57.782+00:00|INFO|ServiceManager|KAFKA-source-policy-pdp-pap] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpUpdate stopped 23:16:33 policy-pap | [2024-03-01T23:14:57.785+00:00|INFO|RequestImpl|KAFKA-source-policy-pdp-pap] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f PdpUpdate successful 23:16:33 policy-pap | [2024-03-01T23:14:57.785+00:00|INFO|PdpRequests|KAFKA-source-policy-pdp-pap] apex-fdc6d847-0af6-485e-9b08-c41bc13abd4f has no more requests 23:16:33 policy-pap | [2024-03-01T23:15:03.742+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:16:33 policy-pap | [2024-03-01T23:15:03.749+00:00|INFO|GsonMessageBodyHandler|pool-2-thread-1] Using GSON for REST calls 23:16:33 policy-pap | [2024-03-01T23:15:04.141+00:00|INFO|SessionData|http-nio-6969-exec-7] unknown group testGroup 23:16:33 policy-pap | [2024-03-01T23:15:04.706+00:00|INFO|SessionData|http-nio-6969-exec-7] create cached group testGroup 23:16:33 policy-pap | [2024-03-01T23:15:04.707+00:00|INFO|SessionData|http-nio-6969-exec-7] creating DB group testGroup 23:16:33 policy-pap | [2024-03-01T23:15:05.257+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup 23:16:33 policy-pap | [2024-03-01T23:15:05.499+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy onap.restart.tca 1.0.0 23:16:33 policy-pap | [2024-03-01T23:15:05.592+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] Registering a deploy for policy operational.apex.decisionMaker 1.0.0 23:16:33 policy-pap | [2024-03-01T23:15:05.592+00:00|INFO|SessionData|http-nio-6969-exec-1] update cached group testGroup 23:16:33 policy-pap | [2024-03-01T23:15:05.593+00:00|INFO|SessionData|http-nio-6969-exec-1] updating DB group testGroup 23:16:33 policy-pap | [2024-03-01T23:15:05.608+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-1] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=DEPLOYMENT, timestamp=2024-03-01T23:15:05Z, user=policyadmin), PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=DEPLOYMENT, timestamp=2024-03-01T23:15:05Z, user=policyadmin)] 23:16:33 policy-pap | 
[2024-03-01T23:15:06.297+00:00|INFO|SessionData|http-nio-6969-exec-6] cache group testGroup 23:16:33 policy-pap | [2024-03-01T23:15:06.298+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-6] remove policy onap.restart.tca 1.0.0 from subgroup testGroup pdpTypeA count=0 23:16:33 policy-pap | [2024-03-01T23:15:06.298+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] Registering an undeploy for policy onap.restart.tca 1.0.0 23:16:33 policy-pap | [2024-03-01T23:15:06.298+00:00|INFO|SessionData|http-nio-6969-exec-6] update cached group testGroup 23:16:33 policy-pap | [2024-03-01T23:15:06.299+00:00|INFO|SessionData|http-nio-6969-exec-6] updating DB group testGroup 23:16:33 policy-pap | [2024-03-01T23:15:06.310+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-6] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeA, policy=onap.restart.tca 1.0.0, action=UNDEPLOYMENT, timestamp=2024-03-01T23:15:06Z, user=policyadmin)] 23:16:33 policy-pap | [2024-03-01T23:15:06.676+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group defaultGroup 23:16:33 policy-pap | [2024-03-01T23:15:06.676+00:00|INFO|SessionData|http-nio-6969-exec-5] cache group testGroup 23:16:33 policy-pap | [2024-03-01T23:15:06.677+00:00|INFO|PdpGroupDeleteProvider|http-nio-6969-exec-5] remove policy operational.apex.decisionMaker 1.0.0 from subgroup testGroup pdpTypeC count=0 23:16:33 policy-pap | [2024-03-01T23:15:06.677+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] Registering an undeploy for policy operational.apex.decisionMaker 1.0.0 23:16:33 policy-pap | [2024-03-01T23:15:06.677+00:00|INFO|SessionData|http-nio-6969-exec-5] update cached group testGroup 23:16:33 policy-pap | [2024-03-01T23:15:06.677+00:00|INFO|SessionData|http-nio-6969-exec-5] updating DB group testGroup 23:16:33 policy-pap | [2024-03-01T23:15:06.688+00:00|INFO|PolicyAuditManager|http-nio-6969-exec-5] sending audit records to database: [PolicyAudit(auditId=null, pdpGroup=testGroup, pdpType=pdpTypeC, policy=operational.apex.decisionMaker 1.0.0, action=UNDEPLOYMENT, timestamp=2024-03-01T23:15:06Z, user=policyadmin)] 23:16:33 policy-pap | [2024-03-01T23:15:27.263+00:00|INFO|SessionData|http-nio-6969-exec-1] cache group testGroup 23:16:33 policy-pap | [2024-03-01T23:15:27.266+00:00|INFO|SessionData|http-nio-6969-exec-1] deleting DB group testGroup 23:16:33 policy-pap | [2024-03-01T23:15:27.632+00:00|INFO|TimerManager|Thread-9] update timer discarded (expired) Timer [name=82cc1e45-516f-4977-9810-91f911494c61, expireMs=1709334927632] 23:16:33 policy-pap | [2024-03-01T23:15:27.707+00:00|INFO|TimerManager|Thread-10] state-change timer discarded (expired) Timer [name=1f5da1e6-b10a-4031-a568-7543e4e07892, expireMs=1709334927706] 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_nodeTemplatesName ON toscatopologytemplate(nodeTemplatesName, nodeTemplatessVersion) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0920-FK_ToscaTopologyTemplate_policyName.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE INDEX FK_ToscaTopologyTemplate_policyName ON toscatopologytemplate(policyName, policyVersion) 23:16:33 policy-db-migrator | -------------- 
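The deploy and undeploy audits above (onap.restart.tca and operational.apex.decisionMaker in testGroup) are driven through PAP's REST interface on port 6969. Below is a hedged sketch of checking the resulting deployment state from outside the containers; the /policy/pap/v1/policies/deployed path, the use of HTTPS with a self-signed certificate, and the credentials are assumptions about the CSIT configuration rather than facts from this log.

import requests

PAP = "https://localhost:6969"
AUTH = ("policyadmin", "CHANGE_ME")  # placeholder credentials; use the CSIT values

resp = requests.get(f"{PAP}/policy/pap/v1/policies/deployed", auth=AUTH, verify=False)
resp.raise_for_status()
print(resp.json())  # currently deployed policies per PDP group, as reported by PAP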
23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0940-PdpPolicyStatus_PdpGroup.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE INDEX PdpPolicyStatus_PdpGroup ON pdppolicystatus(PDPGROUP) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE INDEX TscaServiceTemplatetopologyTemplateParentLocalName ON toscaservicetemplate(topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0960-FK_ToscaNodeTemplate_capabilitiesName.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_capabilitiesName FOREIGN KEY (capabilitiesName, capabilitiesVersion) REFERENCES toscacapabilityassignments (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0970-FK_ToscaNodeTemplate_requirementsName.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | ALTER TABLE toscanodetemplate ADD CONSTRAINT FK_ToscaNodeTemplate_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0980-FK_ToscaNodeType_requirementsName.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | ALTER TABLE toscanodetype ADD CONSTRAINT FK_ToscaNodeType_requirementsName FOREIGN KEY (requirementsName, requirementsVersion) REFERENCES toscarequirements (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_capabilityTypesName FOREIGN KEY (capabilityTypesName, capabilityTypesVersion) REFERENCES toscacapabilitytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 1000-FK_ToscaServiceTemplate_dataTypesName.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_dataTypesName FOREIGN KEY (dataTypesName, dataTypesVersion) REFERENCES toscadatatypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 1010-FK_ToscaServiceTemplate_nodeTypesName.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT 
FK_ToscaServiceTemplate_nodeTypesName FOREIGN KEY (nodeTypesName, nodeTypesVersion) REFERENCES toscanodetypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 1020-FK_ToscaServiceTemplate_policyTypesName.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_policyTypesName FOREIGN KEY (policyTypesName, policyTypesVersion) REFERENCES toscapolicytypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT FK_ToscaServiceTemplate_relationshipTypesName FOREIGN KEY (relationshipTypesName, relationshipTypesVersion) REFERENCES toscarelationshiptypes (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_nodeTemplatesName FOREIGN KEY (nodeTemplatesName, nodeTemplatessVersion) REFERENCES toscanodetemplates (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 1050-FK_ToscaTopologyTemplate_policyName.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | ALTER TABLE toscatopologytemplate ADD CONSTRAINT FK_ToscaTopologyTemplate_policyName FOREIGN KEY (policyName, policyVersion) REFERENCES toscapolicies (name, version) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.691731917Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=297.031µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.69545319Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.695763412Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=309.842µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.699736997Z level=info msg="Executing migration" id="create folder table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.700740592Z level=info msg="Migration successfully executed" id="create folder table" duration=1.002955ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.705967084Z level=info msg="Executing migration" id="Add index for parent_uid" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.707263233Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=1.295299ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.710967975Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id" 23:16:33 
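Once the 09xx/10xx scripts above have run, every FK_Tosca* and TscaServiceTemplate* constraint should exist in the policy schema. A minimal sketch for confirming that against the database follows; the mariadb host name, schema name and credentials are assumptions about the CSIT compose setup.

import mysql.connector

conn = mysql.connector.connect(
    host="mariadb", user="policy_user", password="policy_user", database="policyadmin"
)
cur = conn.cursor()
cur.execute(
    "SELECT CONSTRAINT_NAME, TABLE_NAME, REFERENCED_TABLE_NAME "
    "FROM information_schema.REFERENTIAL_CONSTRAINTS "
    "WHERE CONSTRAINT_SCHEMA = DATABASE()"
)
for constraint, table, referenced in cur.fetchall():
    # Expect entries such as FK_ToscaServiceTemplate_policyTypesName: toscaservicetemplate -> toscapolicytypes
    print(f"{constraint}: {table} -> {referenced}")
conn.close()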
grafana | logger=migrator t=2024-03-01T23:14:01.712250432Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=1.281827ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.717264403Z level=info msg="Executing migration" id="Update folder title length" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.717288463Z level=info msg="Migration successfully executed" id="Update folder title length" duration=24.83µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.722172693Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.723609142Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=1.434809ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.727884658Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.729584949Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=1.700131ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.733452922Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.734662959Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=1.209737ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.739823091Z level=info msg="Executing migration" id="Sync dashboard and folder table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.740615636Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=791.535µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.74462686Z level=info msg="Executing migration" id="Remove ghost folders from the folder table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.745151713Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=524.433µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.749170568Z level=info msg="Executing migration" id="create anon_device table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.750102354Z level=info msg="Migration successfully executed" id="create anon_device table" duration=931.476µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.755558107Z level=info msg="Executing migration" id="add unique index anon_device.device_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.757453659Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.895002ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.761684235Z level=info msg="Executing migration" id="add index anon_device.updated_at" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.763334425Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.65038ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.767273609Z level=info msg="Executing migration" id="create signing_key table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.768085603Z level=info msg="Migration successfully executed" id="create signing_key table" duration=811.645µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.772822392Z level=info 
msg="Executing migration" id="add unique index signing_key.key_id" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.774455053Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.631261ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.778358996Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.780083567Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.727061ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.785079637Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.785736622Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=654.065µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.791332976Z level=info msg="Executing migration" id="Add folder_uid for dashboard" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.802418854Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=11.082448ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.806317687Z level=info msg="Executing migration" id="Populate dashboard folder_uid column" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.807015461Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=697.934µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.811310857Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.812603435Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.289338ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.817709237Z level=info msg="Executing migration" id="create sso_setting table" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.818765153Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=1.055276ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.824415627Z level=info msg="Executing migration" id="copy kvstore migration status to each org" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.825713525Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=1.298798ms 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.829475888Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status" 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.830044081Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=569.163µs 23:16:33 grafana | logger=migrator t=2024-03-01T23:14:01.833733104Z level=info msg="migrations completed" performed=526 skipped=0 duration=4.724682186s 23:16:33 grafana | logger=sqlstore t=2024-03-01T23:14:01.842966041Z level=info msg="Created default admin" user=admin 23:16:33 grafana | logger=sqlstore t=2024-03-01T23:14:01.843340513Z level=info msg="Created default organization" 23:16:33 grafana | logger=secrets t=2024-03-01T23:14:01.849088838Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1 23:16:33 grafana | logger=plugin.store 
t=2024-03-01T23:14:01.868291685Z level=info msg="Loading plugins..." 23:16:33 kafka | [2024-03-01 23:14:36,114] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,120] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,120] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,120] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,121] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,121] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,121] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,121] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,121] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,121] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,121] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,122] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,122] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from 
NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-9 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,129] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,130] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,130] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,130] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition 
__consumer_offsets-49 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,130] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,130] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,130] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,130] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,130] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,130] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 grafana | logger=local.finder t=2024-03-01T23:14:01.9035091Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled 23:16:33 grafana | logger=plugin.store t=2024-03-01T23:14:01.903655301Z level=info msg="Plugins loaded" count=55 duration=35.364446ms 23:16:33 grafana | logger=query_data t=2024-03-01T23:14:01.905870335Z level=info msg="Query Service initialization" 23:16:33 grafana | logger=live.push_http t=2024-03-01T23:14:01.908794063Z level=info msg="Live Push Gateway initialization" 23:16:33 grafana | logger=ngalert.migration t=2024-03-01T23:14:01.914364317Z level=info msg=Starting 23:16:33 grafana | logger=ngalert.migration orgID=1 t=2024-03-01T23:14:01.916988802Z level=info msg="Migrating alerts for organisation" 23:16:33 grafana | logger=ngalert.migration orgID=1 t=2024-03-01T23:14:01.917962899Z level=info msg="Alerts found to migrate" alerts=0 23:16:33 grafana | logger=ngalert.migration CurrentType=Legacy DesiredType=UnifiedAlerting CleanOnDowngrade=false CleanOnUpgrade=false t=2024-03-01T23:14:01.919462667Z level=info msg="Completed legacy migration" 23:16:33 grafana | logger=infra.usagestats.collector t=2024-03-01T23:14:01.954698853Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2 23:16:33 grafana | logger=provisioning.datasources t=2024-03-01T23:14:01.957137888Z level=info msg="inserting datasource from configuration" name=PolicyPrometheus uid=dkSf71fnz 23:16:33 grafana | logger=provisioning.alerting t=2024-03-01T23:14:01.97565779Z level=info msg="starting to provision alerting" 23:16:33 grafana | logger=provisioning.alerting t=2024-03-01T23:14:01.9756849Z level=info msg="finished to provision alerting" 23:16:33 grafana | logger=ngalert.state.manager t=2024-03-01T23:14:01.976022613Z level=info msg="Warming state cache for startup" 23:16:33 grafana | logger=ngalert.state.manager t=2024-03-01T23:14:01.976630637Z level=info msg="State cache has been initialized" states=0 duration=607.814µs 23:16:33 grafana | logger=ngalert.scheduler t=2024-03-01T23:14:01.976684557Z level=info msg="Starting scheduler" tickInterval=10s 23:16:33 grafana | logger=ticker t=2024-03-01T23:14:01.976743507Z level=info msg=starting first_tick=2024-03-01T23:14:10Z 23:16:33 
grafana | logger=ngalert.multiorg.alertmanager t=2024-03-01T23:14:01.976763207Z level=info msg="Starting MultiOrg Alertmanager" 23:16:33 grafana | logger=grafanaStorageLogger t=2024-03-01T23:14:01.977482472Z level=info msg="Storage starting" 23:16:33 grafana | logger=http.server t=2024-03-01T23:14:01.979634835Z level=info msg="HTTP Server Listen" address=[::]:3000 protocol=http subUrl= socket= 23:16:33 grafana | logger=grafana-apiserver t=2024-03-01T23:14:01.984653786Z level=info msg="Authentication is disabled" 23:16:33 grafana | logger=sqlstore.transactions t=2024-03-01T23:14:01.989215213Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked" 23:16:33 grafana | logger=grafana-apiserver t=2024-03-01T23:14:01.993257348Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager" 23:16:33 grafana | logger=plugins.update.checker t=2024-03-01T23:14:02.098864028Z level=info msg="Update check succeeded" duration=118.810721ms 23:16:33 grafana | logger=grafana.update.checker t=2024-03-01T23:14:02.12297102Z level=info msg="Update check succeeded" duration=144.075519ms 23:16:33 grafana | logger=sqlstore.transactions t=2024-03-01T23:14:02.127331219Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked" 23:16:33 grafana | logger=sqlstore.transactions t=2024-03-01T23:14:02.174435426Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=2 code="database is locked" 23:16:33 grafana | logger=infra.usagestats t=2024-03-01T23:15:56.988974158Z level=info msg="Usage stats are ready to report" 23:16:33 policy-db-migrator | > upgrade 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | ALTER TABLE toscaservicetemplate ADD CONSTRAINT TscaServiceTemplatetopologyTemplateParentLocalName FOREIGN KEY (topologyTemplateParentLocalName, topologyTemplateLocalName, topologyTemplateParentKeyVersion, topologyTemplateParentKeyName) REFERENCES toscatopologytemplate (parentLocalName, localName, parentKeyVersion, parentKeyName) ON UPDATE RESTRICT ON DELETE RESTRICT 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0100-pdp.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | ALTER TABLE pdp ADD COLUMN LASTUPDATE datetime NOT NULL DEFAULT CURRENT_TIMESTAMP AFTER HEALTHY 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE INDEX IDX_TSIDX1 ON pdpstatistics(timeStamp, name, version) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0120-pk_pdpstatistics.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0130-pdpstatistics.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | ALTER TABLE pdpstatistics ADD COLUMN POLICYUNDEPLOYCOUNT BIGINT DEFAULT NULL AFTER POLICYEXECUTEDSUCCESSCOUNT, ADD COLUMN POLICYUNDEPLOYFAILCOUNT 
BIGINT DEFAULT NULL, ADD COLUMN POLICYUNDEPLOYSUCCESSCOUNT BIGINT DEFAULT NULL, ADD COLUMN ID BIGINT NOT NULL 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0140-pk_pdpstatistics.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | UPDATE pdpstatistics as p JOIN (SELECT name, version, timeStamp, ROW_NUMBER() OVER (ORDER BY timeStamp ASC) AS row_num FROM pdpstatistics GROUP BY name, version, timeStamp) AS t ON (p.name=t.name AND p.version=t.version AND p.timeStamp = t.timeStamp) SET p.id=t.row_num 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID, name, version) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0150-pdpstatistics.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | ALTER TABLE pdpstatistics MODIFY COLUMN timeStamp datetime(6) NULL 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0160-jpapdpstatistics_enginestats.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats ADD COLUMN ID BIGINT DEFAULT NULL AFTER UPTIME 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0170-jpapdpstatistics_enginestats.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | UPDATE jpapdpstatistics_enginestats a 23:16:33 policy-db-migrator | JOIN pdpstatistics b 23:16:33 policy-db-migrator | ON a.name = b.name AND a.version = b.version AND a.timeStamp = b.timeStamp 23:16:33 policy-db-migrator | SET a.id = b.id 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0180-jpapdpstatistics_enginestats.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | ALTER TABLE jpapdpstatistics_enginestats DROP COLUMN timeStamp 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0190-jpapolicyaudit.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS jpapolicyaudit (ACTION INT DEFAULT NULL, PDPGROUP VARCHAR(255) NULL, PDPTYPE VARCHAR(255) NULL, TIMESTAMP datetime DEFAULT NULL, USER VARCHAR(255) NULL, ID BIGINT NOT NULL, name VARCHAR(120) NOT NULL, version VARCHAR(20) NOT NULL, PRIMARY KEY PK_JPAPOLICYAUDIT (ID, name, version)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0200-JpaPolicyAuditIndex_timestamp.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE INDEX JpaPolicyAuditIndex_timestamp ON jpapolicyaudit(TIMESTAMP) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0210-sequence.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) 
DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0220-sequence.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | INSERT INTO sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0100-jpatoscapolicy_targets.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | ALTER TABLE jpatoscapolicy_targets ADD COLUMN toscaPolicyName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICY_TARGETS PRIMARY KEY (toscaPolicyName, toscaPolicyVersion) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0110-jpatoscapolicytype_targets.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 kafka | [2024-03-01 23:14:36,130] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,130] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,130] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,130] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,130] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,130] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,130] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,130] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,130] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,130] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NonExistentReplica to NewReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,130] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-22 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, 
partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-30 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-25 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-35 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-38 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-13 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-8 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-21 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-4 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-27 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-7 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-9 from NewPartition to 
OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-46 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-41 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-33 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-23 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-49 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,315] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-47 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 simulator | Policy simulator config file: /opt/app/policy/simulators/etc/mounted/simParameters.json 23:16:33 simulator | overriding logback.xml 23:16:33 simulator | 2024-03-01 23:13:56,555 INFO replacing 'HOST_NAME' with simulator in /opt/app/policy/simulators/etc/mounted/simParameters.json 23:16:33 simulator | 2024-03-01 23:13:56,610 INFO org.onap.policy.models.simulators starting 23:16:33 simulator | 2024-03-01 23:13:56,610 INFO org.onap.policy.models.simulators starting CDS gRPC Server Properties 23:16:33 simulator | 2024-03-01 23:13:56,784 INFO org.onap.policy.models.simulators starting org.onap.policy.simulators.AaiSimulatorJaxRs_RESOURCE_LOCATION 23:16:33 simulator | 2024-03-01 23:13:56,785 INFO org.onap.policy.models.simulators starting A&AI simulator 23:16:33 simulator | 2024-03-01 23:13:56,885 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI 
simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:33 simulator | 2024-03-01 23:13:56,895 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:33 simulator | 2024-03-01 23:13:56,898 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,STOPPED}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:33 simulator | 2024-03-01 23:13:56,902 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:16:33 simulator | 2024-03-01 23:13:56,966 INFO Session workerName=node0 23:16:33 simulator | 2024-03-01 23:13:57,432 INFO Using GSON for REST calls 23:16:33 simulator | 2024-03-01 23:13:57,502 INFO Started o.e.j.s.ServletContextHandler@2a2c13a8{/,null,AVAILABLE} 23:16:33 simulator | 2024-03-01 23:13:57,508 INFO Started A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666} 23:16:33 simulator | 2024-03-01 23:13:57,515 INFO Started Server@45905bff{STARTING}[11.0.20,sto=0] @1430ms 23:16:33 simulator | 2024-03-01 23:13:57,515 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=A&AI simulator, host=0.0.0.0, port=6666, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45905bff{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@2a2c13a8{/,null,AVAILABLE}, connector=A&AI simulator@54a67a45{HTTP/1.1, (http/1.1)}{0.0.0.0:6666}, jettyThread=Thread[A&AI simulator-6666,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-5c18016b==org.glassfish.jersey.servlet.ServletContainer@266aec30{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4383 ms. 
23:16:33 simulator | 2024-03-01 23:13:57,522 INFO org.onap.policy.models.simulators starting SDNC simulator 23:16:33 simulator | 2024-03-01 23:13:57,527 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:33 simulator | 2024-03-01 23:13:57,528 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:33 simulator | 2024-03-01 23:13:57,532 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,STOPPED}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:33 simulator | 2024-03-01 23:13:57,533 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:16:33 simulator | 2024-03-01 23:13:57,542 INFO Session workerName=node0 23:16:33 simulator | 2024-03-01 23:13:57,610 INFO Using GSON for REST calls 23:16:33 simulator | 2024-03-01 23:13:57,619 INFO Started o.e.j.s.ServletContextHandler@62452cc9{/,null,AVAILABLE} 23:16:33 simulator | 2024-03-01 23:13:57,620 INFO Started SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668} 23:16:33 simulator | 2024-03-01 23:13:57,621 INFO Started Server@45e37a7e{STARTING}[11.0.20,sto=0] @1536ms 23:16:33 policy-db-migrator | ALTER TABLE jpatoscapolicytype_targets ADD COLUMN toscaPolicyTypeName VARCHAR(120) NOT NULL, ADD COLUMN toscaPolicyTypeVersion VARCHAR(20) NOT NULL, ADD CONSTRAINT PK_JPATOSCAPOLICYTYPE_TARGETS PRIMARY KEY (toscaPolicyTypeName, 
toscaPolicyTypeVersion) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0120-toscatrigger.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | DROP TABLE IF EXISTS toscatrigger 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0130-jpatoscapolicytype_triggers.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | ALTER TABLE jpatoscapolicytype_triggers MODIFY COLUMN triggers LONGBLOB 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0140-toscaparameter.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | DROP TABLE IF EXISTS toscaparameter 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0150-toscaproperty.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_constraints 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | DROP TABLE IF EXISTS jpatoscaproperty_metadata 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | DROP TABLE IF EXISTS toscaproperty 23:16:33 simulator | 2024-03-01 23:13:57,621 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=SDNC simulator, host=0.0.0.0, port=6668, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@45e37a7e{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@62452cc9{/,null,AVAILABLE}, connector=SDNC simulator@78fbff54{HTTP/1.1, (http/1.1)}{0.0.0.0:6668}, jettyThread=Thread[SDNC simulator-6668,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-7e22550a==org.glassfish.jersey.servlet.ServletContainer@615c5c2d{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4911 ms. 
23:16:33 simulator | 2024-03-01 23:13:57,623 INFO org.onap.policy.models.simulators starting SO simulator 23:16:33 simulator | 2024-03-01 23:13:57,626 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:33 simulator | 2024-03-01 23:13:57,627 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:33 simulator | 2024-03-01 23:13:57,628 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,STOPPED}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:33 simulator | 2024-03-01 23:13:57,629 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:16:33 simulator | 2024-03-01 23:13:57,632 INFO Session workerName=node0 23:16:33 simulator | 2024-03-01 23:13:57,684 INFO Using GSON for REST calls 23:16:33 simulator | 2024-03-01 23:13:57,696 INFO Started o.e.j.s.ServletContextHandler@488eb7f2{/,null,AVAILABLE} 23:16:33 simulator | 2024-03-01 23:13:57,698 INFO Started SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669} 23:16:33 simulator | 2024-03-01 23:13:57,698 INFO Started Server@7516e4e5{STARTING}[11.0.20,sto=0] @1613ms 23:16:33 simulator | 2024-03-01 23:13:57,698 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, 
toString()=JettyServletServer(name=SO simulator, host=0.0.0.0, port=6669, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@7516e4e5{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@488eb7f2{/,null,AVAILABLE}, connector=SO simulator@5a7005d{HTTP/1.1, (http/1.1)}{0.0.0.0:6669}, jettyThread=Thread[SO simulator-6669,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-1021f6c9==org.glassfish.jersey.servlet.ServletContainer@fe7342f2{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4930 ms. 23:16:33 simulator | 2024-03-01 23:13:57,699 INFO org.onap.policy.models.simulators starting VFC simulator 23:16:33 simulator | 2024-03-01 23:13:57,701 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: WAITED-START 23:16:33 simulator | 2024-03-01 23:13:57,701 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=null, servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:33 simulator | 2024-03-01 23:13:57,703 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STOPPED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,STOPPED}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=false,async=true,src=EMBEDDED:null,STOPPED}})]: STARTING 23:16:33 simulator | 2024-03-01 23:13:57,703 INFO jetty-11.0.20; built: 2024-01-29T21:04:22.394Z; git: 922f8dc188f7011e60d0361de585fd4ac4d63064; jvm 17.0.10+7-alpine-r0 23:16:33 simulator | 2024-03-01 23:13:57,706 INFO Session workerName=node0 23:16:33 simulator | 2024-03-01 23:13:57,754 INFO Using GSON for REST calls 23:16:33 simulator | 
2024-03-01 23:13:57,763 INFO Started o.e.j.s.ServletContextHandler@6035b93b{/,null,AVAILABLE} 23:16:33 simulator | 2024-03-01 23:13:57,764 INFO Started VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670} 23:16:33 simulator | 2024-03-01 23:13:57,764 INFO Started Server@6f0b0a5e{STARTING}[11.0.20,sto=0] @1679ms 23:16:33 simulator | 2024-03-01 23:13:57,764 INFO JettyJerseyServer [Jerseyservlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}}, swaggerId=null, toString()=JettyServletServer(name=VFC simulator, host=0.0.0.0, port=6670, sniHostCheck=false, user=null, password=null, contextPath=/, jettyServer=Server@6f0b0a5e{STARTED}[11.0.20,sto=0], context=o.e.j.s.ServletContextHandler@6035b93b{/,null,AVAILABLE}, connector=VFC simulator@4189d70b{HTTP/1.1, (http/1.1)}{0.0.0.0:6670}, jettyThread=Thread[VFC simulator-6670,5,main], servlets={/*=org.glassfish.jersey.servlet.ServletContainer-3e7634b9==org.glassfish.jersey.servlet.ServletContainer@ca975b37{jsp=null,order=0,inst=true,async=true,src=EMBEDDED:null,STARTED}})]: pending time is 4939 ms. 23:16:33 simulator | 2024-03-01 23:13:57,765 INFO org.onap.policy.models.simulators started 23:16:33 kafka | [2024-03-01 23:14:36,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-16 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-28 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-31 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-36 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-42 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-3 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-18 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, 
brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-37 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,316] INFO [Controller id=1 epoch=1] Changed partition policy-pdp-pap-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-15 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-24 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-17 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-48 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-19 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-11 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-2 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-43 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,316] INFO [Controller id=1 epoch=1] Changed partition 
__consumer_offsets-6 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-14 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-20 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-0 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-44 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-39 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-12 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0160-jpapolicyaudit_pk.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | ALTER TABLE jpapolicyaudit DROP PRIMARY KEY 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | ALTER TABLE jpapolicyaudit ADD CONSTRAINT PK_JPAPOLICYAUDIT PRIMARY KEY (ID) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0170-pdpstatistics_pk.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | ALTER TABLE pdpstatistics DROP PRIMARY KEY 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | ALTER TABLE pdpstatistics ADD CONSTRAINT PK_PDPSTATISTICS PRIMARY KEY (ID) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0180-jpatoscanodetemplate_metadata.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator 
| ALTER TABLE jpatoscanodetemplate_metadata MODIFY COLUMN METADATA LONGTEXT 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0100-upgrade.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | select 'upgrade to 1100 completed' as msg 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | msg 23:16:33 policy-db-migrator | upgrade to 1100 completed 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0100-jpapolicyaudit_renameuser.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | ALTER TABLE jpapolicyaudit RENAME COLUMN USER TO USERNAME 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0110-idx_tsidx1.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | DROP INDEX IDX_TSIDX1 ON pdpstatistics 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE INDEX IDXTSIDX1 ON pdpstatistics(timeStamp, name, version) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0120-audit_sequence.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS audit_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | INSERT INTO audit_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM jpapolicyaudit)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0130-statistics_sequence.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | CREATE TABLE IF NOT EXISTS statistics_sequence (SEQ_NAME VARCHAR(50) NOT NULL, SEQ_COUNT DECIMAL(38) DEFAULT NULL, PRIMARY KEY PK_SEQUENCE (SEQ_NAME)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | INSERT INTO statistics_sequence(SEQ_NAME, SEQ_COUNT) VALUES('SEQ_GEN', (SELECT IFNULL(max(id),0) FROM pdpstatistics)) 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | TRUNCATE TABLE sequence 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0100-pdpstatistics.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | DROP INDEX IDXTSIDX1 ON pdpstatistics 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | DROP TABLE pdpstatistics 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0110-jpapdpstatistics_enginestats.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | DROP TABLE jpapdpstatistics_enginestats 23:16:33 policy-db-migrator | -------------- 
23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | > upgrade 0120-statistics_sequence.sql 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | DROP TABLE statistics_sequence 23:16:33 policy-db-migrator | -------------- 23:16:33 policy-db-migrator | 23:16:33 policy-db-migrator | policyadmin: OK: upgrade (1300) 23:16:33 policy-db-migrator | name version 23:16:33 policy-db-migrator | policyadmin 1300 23:16:33 policy-db-migrator | ID script operation from_version to_version tag success atTime 23:16:33 policy-db-migrator | 1 0100-jpapdpgroup_properties.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:05 23:16:33 policy-db-migrator | 2 0110-jpapdpstatistics_enginestats.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:05 23:16:33 policy-db-migrator | 3 0120-jpapdpsubgroup_policies.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:05 23:16:33 policy-db-migrator | 4 0130-jpapdpsubgroup_properties.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:05 23:16:33 policy-db-migrator | 5 0140-jpapdpsubgroup_supportedpolicytypes.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:05 23:16:33 policy-db-migrator | 6 0150-jpatoscacapabilityassignment_attributes.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:05 23:16:33 policy-db-migrator | 7 0160-jpatoscacapabilityassignment_metadata.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:05 23:16:33 policy-db-migrator | 8 0170-jpatoscacapabilityassignment_occurrences.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:05 23:16:33 policy-db-migrator | 9 0180-jpatoscacapabilityassignment_properties.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:05 23:16:33 policy-db-migrator | 10 0190-jpatoscacapabilitytype_metadata.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:05 23:16:33 policy-db-migrator | 11 0200-jpatoscacapabilitytype_properties.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:05 23:16:33 policy-db-migrator | 12 0210-jpatoscadatatype_constraints.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:05 23:16:33 policy-db-migrator | 13 0220-jpatoscadatatype_metadata.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:05 23:16:33 policy-db-migrator | 14 0230-jpatoscadatatype_properties.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:05 23:16:33 policy-db-migrator | 15 0240-jpatoscanodetemplate_metadata.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:05 23:16:33 policy-db-migrator | 16 0250-jpatoscanodetemplate_properties.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:06 23:16:33 policy-db-migrator | 17 0260-jpatoscanodetype_metadata.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:06 23:16:33 policy-db-migrator | 18 0270-jpatoscanodetype_properties.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:06 23:16:33 policy-db-migrator | 19 0280-jpatoscapolicy_metadata.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:06 23:16:33 policy-db-migrator | 20 0290-jpatoscapolicy_properties.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:06 23:16:33 policy-db-migrator | 21 0300-jpatoscapolicy_targets.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:06 23:16:33 policy-db-migrator | 22 0310-jpatoscapolicytype_metadata.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:06 23:16:33 policy-db-migrator | 23 0320-jpatoscapolicytype_properties.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:06 23:16:33 policy-db-migrator | 24 0330-jpatoscapolicytype_targets.sql upgrade 0 0800 
0103242314050800u 1 2024-03-01 23:14:06
23:16:33 policy-db-migrator | 25 0340-jpatoscapolicytype_triggers.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:06
23:16:33 policy-db-migrator | 26 0350-jpatoscaproperty_constraints.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:06
23:16:33 policy-db-migrator | 27 0360-jpatoscaproperty_metadata.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:06
23:16:33 policy-db-migrator | 28 0370-jpatoscarelationshiptype_metadata.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:06
23:16:33 policy-db-migrator | 29 0380-jpatoscarelationshiptype_properties.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:06
23:16:33 policy-db-migrator | 30 0390-jpatoscarequirement_metadata.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:06
23:16:33 policy-db-migrator | 31 0400-jpatoscarequirement_occurrences.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:06
23:16:33 policy-db-migrator | 32 0410-jpatoscarequirement_properties.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:06
23:16:33 policy-db-migrator | 33 0420-jpatoscaservicetemplate_metadata.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:06
23:16:33 policy-db-migrator | 34 0430-jpatoscatopologytemplate_inputs.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:06
23:16:33 policy-db-migrator | 35 0440-pdpgroup_pdpsubgroup.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:06
23:16:33 policy-db-migrator | 36 0450-pdpgroup.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:06
23:16:33 policy-db-migrator | 37 0460-pdppolicystatus.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:06
23:16:33 policy-db-migrator | 38 0470-pdp.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:06
23:16:33 policy-db-migrator | 39 0480-pdpstatistics.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:06
23:16:33 policy-db-migrator | 40 0490-pdpsubgroup_pdp.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:06
23:16:33 policy-db-migrator | 41 0500-pdpsubgroup.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:06
23:16:33 policy-db-migrator | 42 0510-toscacapabilityassignment.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:07
23:16:33 policy-db-migrator | 43 0520-toscacapabilityassignments.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:07
23:16:33 policy-db-migrator | 44 0530-toscacapabilityassignments_toscacapabilityassignment.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:07
23:16:33 policy-db-migrator | 45 0540-toscacapabilitytype.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:07
23:16:33 policy-db-migrator | 46 0550-toscacapabilitytypes.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:07
23:16:33 policy-db-migrator | 47 0560-toscacapabilitytypes_toscacapabilitytype.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:07
23:16:33 policy-db-migrator | 48 0570-toscadatatype.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:07
23:16:33 policy-db-migrator | 49 0580-toscadatatypes.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:07
23:16:33 policy-db-migrator | 50 0590-toscadatatypes_toscadatatype.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:07
23:16:33 policy-db-migrator | 51 0600-toscanodetemplate.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:07
23:16:33 policy-db-migrator | 52 0610-toscanodetemplates.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:07
23:16:33 policy-db-migrator | 53 0620-toscanodetemplates_toscanodetemplate.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:07
23:16:33 policy-db-migrator | 54 0630-toscanodetype.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:07
23:16:33 policy-db-migrator | 55 0640-toscanodetypes.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:07
23:16:33 policy-db-migrator | 56 0650-toscanodetypes_toscanodetype.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:07
23:16:33 policy-db-migrator | 57 0660-toscaparameter.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:07
23:16:33 policy-db-migrator | 58 0670-toscapolicies.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:07
23:16:33 policy-db-migrator | 59 0680-toscapolicies_toscapolicy.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:07
23:16:33 policy-db-migrator | 60 0690-toscapolicy.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:07
23:16:33 policy-db-migrator | 61 0700-toscapolicytype.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:07
23:16:33 policy-db-migrator | 62 0710-toscapolicytypes.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:07
23:16:33 policy-db-migrator | 63 0720-toscapolicytypes_toscapolicytype.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:07
23:16:33 policy-db-migrator | 64 0730-toscaproperty.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:07
23:16:33 policy-db-migrator | 65 0740-toscarelationshiptype.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:07
23:16:33 policy-db-migrator | 66 0750-toscarelationshiptypes.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:08
23:16:33 policy-db-migrator | 67 0760-toscarelationshiptypes_toscarelationshiptype.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:08
23:16:33 policy-db-migrator | 68 0770-toscarequirement.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:08
23:16:33 policy-db-migrator | 69 0780-toscarequirements.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:08
23:16:33 policy-db-migrator | 70 0790-toscarequirements_toscarequirement.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:08
23:16:33 policy-db-migrator | 71 0800-toscaservicetemplate.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:08
23:16:33 policy-db-migrator | 72 0810-toscatopologytemplate.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:08
23:16:33 policy-db-migrator | 73 0820-toscatrigger.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:08
23:16:33 policy-db-migrator | 74 0830-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:08
23:16:33 policy-db-migrator | 75 0840-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:08
23:16:33 policy-db-migrator | 76 0850-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:08
23:16:33 policy-db-migrator | 77 0860-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:08
23:16:33 policy-db-migrator | 78 0870-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:08
23:16:33 policy-db-migrator | 79 0880-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:08
23:16:33 policy-db-migrator | 80 0890-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:08
23:16:33 policy-db-migrator | 81 0900-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:08
23:16:33 policy-db-migrator | 82 0910-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:08
23:16:33 policy-db-migrator | 83 0920-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:08
23:16:33 policy-db-migrator | 84 0940-PdpPolicyStatus_PdpGroup.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:08
23:16:33 policy-db-migrator | 85 0950-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:08
23:16:33 policy-db-migrator | 86 0960-FK_ToscaNodeTemplate_capabilitiesName.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:08
23:16:33 policy-db-migrator | 87 0970-FK_ToscaNodeTemplate_requirementsName.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:08
23:16:33 policy-db-migrator | 88 0980-FK_ToscaNodeType_requirementsName.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:09
23:16:33 policy-db-migrator | 89 0990-FK_ToscaServiceTemplate_capabilityTypesName.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:09
23:16:33 policy-db-migrator | 90 1000-FK_ToscaServiceTemplate_dataTypesName.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:09
23:16:33 policy-db-migrator | 91 1010-FK_ToscaServiceTemplate_nodeTypesName.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:09
23:16:33 policy-db-migrator | 92 1020-FK_ToscaServiceTemplate_policyTypesName.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:09
23:16:33 policy-db-migrator | 93 1030-FK_ToscaServiceTemplate_relationshipTypesName.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:09
23:16:33 policy-db-migrator | 94 1040-FK_ToscaTopologyTemplate_nodeTemplatesName.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:09
23:16:33 policy-db-migrator | 95 1050-FK_ToscaTopologyTemplate_policyName.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:09
23:16:33 policy-db-migrator | 96 1060-TscaServiceTemplatetopologyTemplateParentLocalName.sql upgrade 0 0800 0103242314050800u 1 2024-03-01 23:14:09
23:16:33 policy-db-migrator | 97 0100-pdp.sql upgrade 0800 0900 0103242314050900u 1 2024-03-01 23:14:09
23:16:33 policy-db-migrator | 98 0110-idx_tsidx1.sql upgrade 0800 0900 0103242314050900u 1 2024-03-01 23:14:09
23:16:33 policy-db-migrator | 99 0120-pk_pdpstatistics.sql upgrade 0800 0900 0103242314050900u 1 2024-03-01 23:14:09
23:16:33 policy-db-migrator | 100 0130-pdpstatistics.sql upgrade 0800 0900 0103242314050900u 1 2024-03-01 23:14:09
23:16:33 policy-db-migrator | 101 0140-pk_pdpstatistics.sql upgrade 0800 0900 0103242314050900u 1 2024-03-01 23:14:09
23:16:33 policy-db-migrator | 102 0150-pdpstatistics.sql upgrade 0800 0900 0103242314050900u 1 2024-03-01 23:14:09
23:16:33 policy-db-migrator | 103 0160-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0103242314050900u 1 2024-03-01 23:14:09
23:16:33 policy-db-migrator | 104 0170-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0103242314050900u 1 2024-03-01 23:14:09
23:16:33 policy-db-migrator | 105 0180-jpapdpstatistics_enginestats.sql upgrade 0800 0900 0103242314050900u 1 2024-03-01 23:14:09
23:16:33 policy-db-migrator | 106 0190-jpapolicyaudit.sql upgrade 0800 0900 0103242314050900u 1 2024-03-01 23:14:09
23:16:33 policy-db-migrator | 107 0200-JpaPolicyAuditIndex_timestamp.sql upgrade 0800 0900 0103242314050900u 1 2024-03-01 23:14:09
23:16:33 policy-db-migrator | 108 0210-sequence.sql upgrade 0800 0900 0103242314050900u 1 2024-03-01 23:14:10
23:16:33 policy-db-migrator | 109 0220-sequence.sql upgrade 0800 0900 0103242314050900u 1 2024-03-01 23:14:10
23:16:33 policy-db-migrator | 110 0100-jpatoscapolicy_targets.sql upgrade 0900 1000 0103242314051000u 1 2024-03-01 23:14:10
23:16:33 policy-db-migrator | 111 0110-jpatoscapolicytype_targets.sql upgrade 0900 1000 0103242314051000u 1 2024-03-01 23:14:10
23:16:33 policy-db-migrator | 112 0120-toscatrigger.sql upgrade 0900 1000 0103242314051000u 1 2024-03-01 23:14:10
23:16:33 policy-db-migrator | 113 0130-jpatoscapolicytype_triggers.sql upgrade 0900 1000 0103242314051000u 1 2024-03-01 23:14:10
23:16:33 policy-db-migrator | 114 0140-toscaparameter.sql upgrade 0900 1000 0103242314051000u 1 2024-03-01 23:14:10
23:16:33 policy-db-migrator | 115 0150-toscaproperty.sql upgrade 0900 1000 0103242314051000u 1 2024-03-01 23:14:10
23:16:33 policy-db-migrator | 116 0160-jpapolicyaudit_pk.sql upgrade 0900 1000 0103242314051000u 1 2024-03-01 23:14:10
23:16:33 policy-db-migrator | 117 0170-pdpstatistics_pk.sql upgrade 0900 1000 0103242314051000u 1 2024-03-01 23:14:10
23:16:33 policy-db-migrator | 118 0180-jpatoscanodetemplate_metadata.sql upgrade 0900 1000 0103242314051000u 1 2024-03-01 23:14:10
23:16:33 policy-db-migrator | 119 0100-upgrade.sql upgrade 1000 1100 0103242314051100u 1 2024-03-01 23:14:10
23:16:33 policy-db-migrator | 120 0100-jpapolicyaudit_renameuser.sql upgrade 1100 1200 0103242314051200u 1 2024-03-01 23:14:10
23:16:33 policy-db-migrator | 121 0110-idx_tsidx1.sql upgrade 1100 1200 0103242314051200u 1 2024-03-01 23:14:10
23:16:33 policy-db-migrator | 122 0120-audit_sequence.sql upgrade 1100 1200 0103242314051200u 1 2024-03-01 23:14:10
23:16:33 policy-db-migrator | 123 0130-statistics_sequence.sql upgrade 1100 1200 0103242314051200u 1 2024-03-01 23:14:10
23:16:33 policy-db-migrator | 124 0100-pdpstatistics.sql upgrade 1200 1300 0103242314051300u 1 2024-03-01 23:14:10
23:16:33 policy-db-migrator | 125 0110-jpapdpstatistics_enginestats.sql upgrade 1200 1300 0103242314051300u 1 2024-03-01 23:14:10
23:16:33 policy-db-migrator | 126 0120-statistics_sequence.sql upgrade 1200 1300 0103242314051300u 1 2024-03-01 23:14:11
23:16:33 policy-db-migrator | policyadmin: OK @ 1300
23:16:33 kafka | [2024-03-01 23:14:36,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-45 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:33 kafka | [2024-03-01 23:14:36,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-1 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:33 kafka | [2024-03-01 23:14:36,316] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-5 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:33 kafka | [2024-03-01 23:14:36,317] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-26 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger)
23:16:33 kafka | [2024-03-01 23:14:36,317] INFO [Controller id=1 epoch=1] Changed partition
__consumer_offsets-29 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,317] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-34 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,317] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-10 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,317] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-32 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,317] INFO [Controller id=1 epoch=1] Changed partition __consumer_offsets-40 from NewPartition to OnlinePartition with state LeaderAndIsr(leader=1, leaderEpoch=0, isrWithBrokerEpoch=List(BrokerState(brokerId=1, brokerEpoch=-1)), leaderRecoveryState=RECOVERED, partitionEpoch=0) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,322] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-13 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,322] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-46 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,322] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-9 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,322] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-42 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,322] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], 
partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-21 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,322] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-17 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,322] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-30 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,322] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-26 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,322] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-5 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,322] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-38 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,322] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,323] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-34 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,323] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition 
__consumer_offsets-16 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,323] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-45 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,323] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-12 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,323] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-41 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,323] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-24 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,323] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-20 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,323] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-49 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,323] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-0 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,323] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-29 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,323] TRACE [Controller id=1 epoch=1] Sending become-leader 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-25 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,323] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-8 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,323] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-37 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,323] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-4 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,323] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-33 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,323] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-15 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,323] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-48 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,323] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-11 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,323] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, 
isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-44 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,323] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-23 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,323] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-19 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,323] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-32 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,323] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-28 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,323] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-7 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,323] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-40 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,323] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-3 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,323] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition 
__consumer_offsets-36 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,323] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-47 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,323] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-14 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,324] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-43 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,324] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-10 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,324] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-22 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,324] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-18 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,324] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-31 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,324] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-27 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,324] TRACE [Controller id=1 epoch=1] Sending become-leader 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-39 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,324] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-6 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,324] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-35 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,324] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition policy-pdp-pap-0 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,324] TRACE [Controller id=1 epoch=1] Sending become-leader LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) to broker 1 for partition __consumer_offsets-2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,326] INFO [Controller id=1 epoch=1] Sending LeaderAndIsr request to broker 1 with 51 become-leader and 0 become-follower partitions (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,338] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet(1) for 51 partitions (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,342] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-32 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,342] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-5 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,342] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-44 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,342] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-48 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,342] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-46 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,342] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-20 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | 
[2024-03-01 23:14:36,342] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition policy-pdp-pap-0 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,342] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-43 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,342] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-24 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,342] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-6 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,342] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-18 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,342] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-21 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,342] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-1 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,342] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-14 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,342] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-34 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,342] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-16 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,342] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-29 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,342] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-11 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,342] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-0 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,342] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-22 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,342] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-47 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,342] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-36 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,342] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-28 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,342] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-42 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,343] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for 
partition __consumer_offsets-9 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,343] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-37 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,343] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-13 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,343] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-30 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,343] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-35 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,343] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-39 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,343] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-12 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,343] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-27 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,343] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-45 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,343] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-19 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,343] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-49 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,343] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-40 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,343] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-41 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,343] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-38 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,343] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-8 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,343] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-7 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,343] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-33 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,343] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-25 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,343] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-31 from NewReplica to OnlineReplica 
(state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,343] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-23 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,343] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-10 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,343] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-2 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,343] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-17 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,343] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-4 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,343] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-15 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,343] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-26 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,343] TRACE [Controller id=1 epoch=1] Changed state of replica 1 for partition __consumer_offsets-3 from NewReplica to OnlineReplica (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,344] INFO [Controller id=1 epoch=1] Sending UpdateMetadata request to brokers HashSet() for 0 partitions (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,348] INFO [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 for 51 partitions (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,349] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,349] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,349] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,349] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,349] TRACE [Broker id=1] Received 
LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,349] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,349] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,349] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,349] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,349] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,349] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,350] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,350] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,350] TRACE [Broker id=1] 
Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,350] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,350] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,350] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,350] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,350] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,350] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,351] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,351] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,351] TRACE 
[Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,351] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,351] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,351] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,351] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,351] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,351] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,351] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,352] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 
23:14:36,352] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,352] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,352] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,352] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,352] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,352] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,352] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,352] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,352] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | 
[2024-03-01 23:14:36,353] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,353] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,353] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,353] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,353] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,353] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,353] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,353] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=39, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,353] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 
kafka | [2024-03-01 23:14:36,353] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,354] TRACE [Broker id=1] Received LeaderAndIsr request LeaderAndIsrPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], partitionEpoch=0, replicas=[1], addingReplicas=[], removingReplicas=[], isNew=true, leaderRecoveryState=0) correlation id 1 from controller 1 epoch 1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,394] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,394] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,394] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,394] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,394] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,394] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,394] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,394] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,394] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,394] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,394] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,394] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 
23:16:33 kafka | [2024-03-01 23:14:36,394] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,394] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,394] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,394] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,394] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,394] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,394] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,394] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,394] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,394] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,394] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,394] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,394] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,395] TRACE [Broker id=1] Handling LeaderAndIsr request 
correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition 
__consumer_offsets-27 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-36 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,395] TRACE [Broker id=1] Handling LeaderAndIsr request correlationId 1 from controller 1 epoch 1 starting the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,396] INFO [ReplicaFetcherManager on broker 1] Removed fetcher for partitions HashSet(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-38, __consumer_offsets-13, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, policy-pdp-pap-0, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-17, __consumer_offsets-48, __consumer_offsets-19, __consumer_offsets-11, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager) 23:16:33 kafka | [2024-03-01 23:14:36,396] INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 1 from controller 1 epoch 1 as part of the become-leader transition for 51 partitions (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,445] INFO [LogLoader partition=__consumer_offsets-3, 
dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,455] INFO Created log for partition __consumer_offsets-3 in /var/lib/kafka/data/__consumer_offsets-3 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,457] INFO [Partition __consumer_offsets-3 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-3 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,458] INFO [Partition __consumer_offsets-3 broker=1] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,459] INFO [Broker id=1] Leader __consumer_offsets-3 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,477] INFO [LogLoader partition=__consumer_offsets-18, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,478] INFO Created log for partition __consumer_offsets-18 in /var/lib/kafka/data/__consumer_offsets-18 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,478] INFO [Partition __consumer_offsets-18 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-18 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,478] INFO [Partition __consumer_offsets-18 broker=1] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,479] INFO [Broker id=1] Leader __consumer_offsets-18 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,516] INFO [LogLoader partition=__consumer_offsets-41, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,517] INFO Created log for partition __consumer_offsets-41 in /var/lib/kafka/data/__consumer_offsets-41 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,517] INFO [Partition __consumer_offsets-41 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-41 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,517] INFO [Partition __consumer_offsets-41 broker=1] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,518] INFO [Broker id=1] Leader __consumer_offsets-41 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,533] INFO [LogLoader partition=__consumer_offsets-10, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,534] INFO Created log for partition __consumer_offsets-10 in /var/lib/kafka/data/__consumer_offsets-10 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,534] INFO [Partition __consumer_offsets-10 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-10 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,534] INFO [Partition __consumer_offsets-10 broker=1] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,534] INFO [Broker id=1] Leader __consumer_offsets-10 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,544] INFO [LogLoader partition=__consumer_offsets-33, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,545] INFO Created log for partition __consumer_offsets-33 in /var/lib/kafka/data/__consumer_offsets-33 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,545] INFO [Partition __consumer_offsets-33 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-33 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,545] INFO [Partition __consumer_offsets-33 broker=1] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,545] INFO [Broker id=1] Leader __consumer_offsets-33 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,554] INFO [LogLoader partition=__consumer_offsets-48, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,555] INFO Created log for partition __consumer_offsets-48 in /var/lib/kafka/data/__consumer_offsets-48 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,555] INFO [Partition __consumer_offsets-48 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-48 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,555] INFO [Partition __consumer_offsets-48 broker=1] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,555] INFO [Broker id=1] Leader __consumer_offsets-48 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,564] INFO [LogLoader partition=__consumer_offsets-19, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,565] INFO Created log for partition __consumer_offsets-19 in /var/lib/kafka/data/__consumer_offsets-19 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,565] INFO [Partition __consumer_offsets-19 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-19 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,565] INFO [Partition __consumer_offsets-19 broker=1] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,565] INFO [Broker id=1] Leader __consumer_offsets-19 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,579] INFO [LogLoader partition=__consumer_offsets-34, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,580] INFO Created log for partition __consumer_offsets-34 in /var/lib/kafka/data/__consumer_offsets-34 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,581] INFO [Partition __consumer_offsets-34 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-34 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,581] INFO [Partition __consumer_offsets-34 broker=1] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,581] INFO [Broker id=1] Leader __consumer_offsets-34 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,596] INFO [LogLoader partition=__consumer_offsets-4, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,597] INFO Created log for partition __consumer_offsets-4 in /var/lib/kafka/data/__consumer_offsets-4 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,597] INFO [Partition __consumer_offsets-4 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-4 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,597] INFO [Partition __consumer_offsets-4 broker=1] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,597] INFO [Broker id=1] Leader __consumer_offsets-4 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,606] INFO [LogLoader partition=__consumer_offsets-11, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,607] INFO Created log for partition __consumer_offsets-11 in /var/lib/kafka/data/__consumer_offsets-11 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,607] INFO [Partition __consumer_offsets-11 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-11 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,607] INFO [Partition __consumer_offsets-11 broker=1] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,607] INFO [Broker id=1] Leader __consumer_offsets-11 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,617] INFO [LogLoader partition=__consumer_offsets-26, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,618] INFO Created log for partition __consumer_offsets-26 in /var/lib/kafka/data/__consumer_offsets-26 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,618] INFO [Partition __consumer_offsets-26 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-26 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,618] INFO [Partition __consumer_offsets-26 broker=1] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,618] INFO [Broker id=1] Leader __consumer_offsets-26 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,626] INFO [LogLoader partition=__consumer_offsets-49, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,626] INFO Created log for partition __consumer_offsets-49 in /var/lib/kafka/data/__consumer_offsets-49 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,626] INFO [Partition __consumer_offsets-49 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-49 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,626] INFO [Partition __consumer_offsets-49 broker=1] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,627] INFO [Broker id=1] Leader __consumer_offsets-49 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,636] INFO [LogLoader partition=__consumer_offsets-39, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,636] INFO Created log for partition __consumer_offsets-39 in /var/lib/kafka/data/__consumer_offsets-39 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,636] INFO [Partition __consumer_offsets-39 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-39 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,637] INFO [Partition __consumer_offsets-39 broker=1] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,637] INFO [Broker id=1] Leader __consumer_offsets-39 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,649] INFO [LogLoader partition=__consumer_offsets-9, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,655] INFO Created log for partition __consumer_offsets-9 in /var/lib/kafka/data/__consumer_offsets-9 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,655] INFO [Partition __consumer_offsets-9 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-9 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,656] INFO [Partition __consumer_offsets-9 broker=1] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,656] INFO [Broker id=1] Leader __consumer_offsets-9 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,666] INFO [LogLoader partition=__consumer_offsets-24, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,668] INFO Created log for partition __consumer_offsets-24 in /var/lib/kafka/data/__consumer_offsets-24 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,668] INFO [Partition __consumer_offsets-24 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-24 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,668] INFO [Partition __consumer_offsets-24 broker=1] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,668] INFO [Broker id=1] Leader __consumer_offsets-24 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,675] INFO [LogLoader partition=__consumer_offsets-31, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,676] INFO Created log for partition __consumer_offsets-31 in /var/lib/kafka/data/__consumer_offsets-31 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,676] INFO [Partition __consumer_offsets-31 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-31 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,689] INFO [Partition __consumer_offsets-31 broker=1] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,690] INFO [Broker id=1] Leader __consumer_offsets-31 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,703] INFO [LogLoader partition=__consumer_offsets-46, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,704] INFO Created log for partition __consumer_offsets-46 in /var/lib/kafka/data/__consumer_offsets-46 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,704] INFO [Partition __consumer_offsets-46 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-46 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,704] INFO [Partition __consumer_offsets-46 broker=1] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,704] INFO [Broker id=1] Leader __consumer_offsets-46 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,714] INFO [LogLoader partition=__consumer_offsets-1, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,715] INFO Created log for partition __consumer_offsets-1 in /var/lib/kafka/data/__consumer_offsets-1 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,715] INFO [Partition __consumer_offsets-1 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-1 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,715] INFO [Partition __consumer_offsets-1 broker=1] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,715] INFO [Broker id=1] Leader __consumer_offsets-1 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,723] INFO [LogLoader partition=__consumer_offsets-16, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,724] INFO Created log for partition __consumer_offsets-16 in /var/lib/kafka/data/__consumer_offsets-16 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,724] INFO [Partition __consumer_offsets-16 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-16 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,724] INFO [Partition __consumer_offsets-16 broker=1] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,725] INFO [Broker id=1] Leader __consumer_offsets-16 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,734] INFO [LogLoader partition=__consumer_offsets-2, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,734] INFO Created log for partition __consumer_offsets-2 in /var/lib/kafka/data/__consumer_offsets-2 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,739] INFO [Partition __consumer_offsets-2 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-2 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,741] INFO [Partition __consumer_offsets-2 broker=1] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,741] INFO [Broker id=1] Leader __consumer_offsets-2 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,757] INFO [LogLoader partition=__consumer_offsets-25, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,763] INFO Created log for partition __consumer_offsets-25 in /var/lib/kafka/data/__consumer_offsets-25 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,763] INFO [Partition __consumer_offsets-25 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-25 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,763] INFO [Partition __consumer_offsets-25 broker=1] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,763] INFO [Broker id=1] Leader __consumer_offsets-25 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,773] INFO [LogLoader partition=__consumer_offsets-40, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,774] INFO Created log for partition __consumer_offsets-40 in /var/lib/kafka/data/__consumer_offsets-40 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,774] INFO [Partition __consumer_offsets-40 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-40 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,774] INFO [Partition __consumer_offsets-40 broker=1] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,774] INFO [Broker id=1] Leader __consumer_offsets-40 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,781] INFO [LogLoader partition=__consumer_offsets-47, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,782] INFO Created log for partition __consumer_offsets-47 in /var/lib/kafka/data/__consumer_offsets-47 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,782] INFO [Partition __consumer_offsets-47 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-47 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,782] INFO [Partition __consumer_offsets-47 broker=1] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,782] INFO [Broker id=1] Leader __consumer_offsets-47 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,798] INFO [LogLoader partition=__consumer_offsets-17, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,801] INFO Created log for partition __consumer_offsets-17 in /var/lib/kafka/data/__consumer_offsets-17 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,801] INFO [Partition __consumer_offsets-17 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-17 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,801] INFO [Partition __consumer_offsets-17 broker=1] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,801] INFO [Broker id=1] Leader __consumer_offsets-17 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,814] INFO [LogLoader partition=__consumer_offsets-32, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,815] INFO Created log for partition __consumer_offsets-32 in /var/lib/kafka/data/__consumer_offsets-32 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,815] INFO [Partition __consumer_offsets-32 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-32 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,815] INFO [Partition __consumer_offsets-32 broker=1] Log loaded for partition __consumer_offsets-32 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,815] INFO [Broker id=1] Leader __consumer_offsets-32 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,822] INFO [LogLoader partition=__consumer_offsets-37, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,824] INFO Created log for partition __consumer_offsets-37 in /var/lib/kafka/data/__consumer_offsets-37 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,824] INFO [Partition __consumer_offsets-37 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-37 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,825] INFO [Partition __consumer_offsets-37 broker=1] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,825] INFO [Broker id=1] Leader __consumer_offsets-37 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,832] INFO [LogLoader partition=__consumer_offsets-7, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,833] INFO Created log for partition __consumer_offsets-7 in /var/lib/kafka/data/__consumer_offsets-7 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,833] INFO [Partition __consumer_offsets-7 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-7 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,833] INFO [Partition __consumer_offsets-7 broker=1] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,833] INFO [Broker id=1] Leader __consumer_offsets-7 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,839] INFO [LogLoader partition=__consumer_offsets-22, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,839] INFO Created log for partition __consumer_offsets-22 in /var/lib/kafka/data/__consumer_offsets-22 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,839] INFO [Partition __consumer_offsets-22 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-22 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,839] INFO [Partition __consumer_offsets-22 broker=1] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,839] INFO [Broker id=1] Leader __consumer_offsets-22 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,845] INFO [LogLoader partition=__consumer_offsets-29, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,846] INFO Created log for partition __consumer_offsets-29 in /var/lib/kafka/data/__consumer_offsets-29 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,846] INFO [Partition __consumer_offsets-29 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-29 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,846] INFO [Partition __consumer_offsets-29 broker=1] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,846] INFO [Broker id=1] Leader __consumer_offsets-29 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,856] INFO [LogLoader partition=__consumer_offsets-44, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,857] INFO Created log for partition __consumer_offsets-44 in /var/lib/kafka/data/__consumer_offsets-44 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,857] INFO [Partition __consumer_offsets-44 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-44 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,857] INFO [Partition __consumer_offsets-44 broker=1] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,857] INFO [Broker id=1] Leader __consumer_offsets-44 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,865] INFO [LogLoader partition=__consumer_offsets-14, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,866] INFO Created log for partition __consumer_offsets-14 in /var/lib/kafka/data/__consumer_offsets-14 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,866] INFO [Partition __consumer_offsets-14 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-14 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,866] INFO [Partition __consumer_offsets-14 broker=1] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,866] INFO [Broker id=1] Leader __consumer_offsets-14 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,874] INFO [LogLoader partition=__consumer_offsets-23, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,875] INFO Created log for partition __consumer_offsets-23 in /var/lib/kafka/data/__consumer_offsets-23 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,875] INFO [Partition __consumer_offsets-23 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-23 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,875] INFO [Partition __consumer_offsets-23 broker=1] Log loaded for partition __consumer_offsets-23 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,875] INFO [Broker id=1] Leader __consumer_offsets-23 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,882] INFO [LogLoader partition=__consumer_offsets-38, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,882] INFO Created log for partition __consumer_offsets-38 in /var/lib/kafka/data/__consumer_offsets-38 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,882] INFO [Partition __consumer_offsets-38 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-38 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,883] INFO [Partition __consumer_offsets-38 broker=1] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,883] INFO [Broker id=1] Leader __consumer_offsets-38 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,889] INFO [LogLoader partition=__consumer_offsets-8, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,891] INFO Created log for partition __consumer_offsets-8 in /var/lib/kafka/data/__consumer_offsets-8 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,891] INFO [Partition __consumer_offsets-8 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-8 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,891] INFO [Partition __consumer_offsets-8 broker=1] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,891] INFO [Broker id=1] Leader __consumer_offsets-8 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,897] INFO [LogLoader partition=policy-pdp-pap-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,897] INFO Created log for partition policy-pdp-pap-0 in /var/lib/kafka/data/policy-pdp-pap-0 with properties {} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,897] INFO [Partition policy-pdp-pap-0 broker=1] No checkpointed highwatermark is found for partition policy-pdp-pap-0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,897] INFO [Partition policy-pdp-pap-0 broker=1] Log loaded for partition policy-pdp-pap-0 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,898] INFO [Broker id=1] Leader policy-pdp-pap-0 with topic id Some(Ov6CtVzQRUKLA8r33TyiiA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,909] INFO [LogLoader partition=__consumer_offsets-45, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,909] INFO Created log for partition __consumer_offsets-45 in /var/lib/kafka/data/__consumer_offsets-45 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,909] INFO [Partition __consumer_offsets-45 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-45 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,910] INFO [Partition __consumer_offsets-45 broker=1] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,910] INFO [Broker id=1] Leader __consumer_offsets-45 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,916] INFO [LogLoader partition=__consumer_offsets-15, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,916] INFO Created log for partition __consumer_offsets-15 in /var/lib/kafka/data/__consumer_offsets-15 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,916] INFO [Partition __consumer_offsets-15 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-15 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,916] INFO [Partition __consumer_offsets-15 broker=1] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,916] INFO [Broker id=1] Leader __consumer_offsets-15 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,923] INFO [LogLoader partition=__consumer_offsets-30, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,923] INFO Created log for partition __consumer_offsets-30 in /var/lib/kafka/data/__consumer_offsets-30 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,923] INFO [Partition __consumer_offsets-30 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-30 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,923] INFO [Partition __consumer_offsets-30 broker=1] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,923] INFO [Broker id=1] Leader __consumer_offsets-30 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,930] INFO [LogLoader partition=__consumer_offsets-0, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,930] INFO Created log for partition __consumer_offsets-0 in /var/lib/kafka/data/__consumer_offsets-0 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,930] INFO [Partition __consumer_offsets-0 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,931] INFO [Partition __consumer_offsets-0 broker=1] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,931] INFO [Broker id=1] Leader __consumer_offsets-0 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,937] INFO [LogLoader partition=__consumer_offsets-35, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,937] INFO Created log for partition __consumer_offsets-35 in /var/lib/kafka/data/__consumer_offsets-35 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,937] INFO [Partition __consumer_offsets-35 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-35 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,938] INFO [Partition __consumer_offsets-35 broker=1] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,938] INFO [Broker id=1] Leader __consumer_offsets-35 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,944] INFO [LogLoader partition=__consumer_offsets-5, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,944] INFO Created log for partition __consumer_offsets-5 in /var/lib/kafka/data/__consumer_offsets-5 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,944] INFO [Partition __consumer_offsets-5 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-5 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,944] INFO [Partition __consumer_offsets-5 broker=1] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,944] INFO [Broker id=1] Leader __consumer_offsets-5 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,951] INFO [LogLoader partition=__consumer_offsets-20, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,951] INFO Created log for partition __consumer_offsets-20 in /var/lib/kafka/data/__consumer_offsets-20 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,951] INFO [Partition __consumer_offsets-20 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-20 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,951] INFO [Partition __consumer_offsets-20 broker=1] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,952] INFO [Broker id=1] Leader __consumer_offsets-20 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,959] INFO [LogLoader partition=__consumer_offsets-27, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,961] INFO Created log for partition __consumer_offsets-27 in /var/lib/kafka/data/__consumer_offsets-27 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,961] INFO [Partition __consumer_offsets-27 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-27 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,961] INFO [Partition __consumer_offsets-27 broker=1] Log loaded for partition __consumer_offsets-27 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,961] INFO [Broker id=1] Leader __consumer_offsets-27 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,967] INFO [LogLoader partition=__consumer_offsets-42, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,969] INFO Created log for partition __consumer_offsets-42 in /var/lib/kafka/data/__consumer_offsets-42 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,969] INFO [Partition __consumer_offsets-42 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-42 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,969] INFO [Partition __consumer_offsets-42 broker=1] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,969] INFO [Broker id=1] Leader __consumer_offsets-42 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,978] INFO [LogLoader partition=__consumer_offsets-12, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,979] INFO Created log for partition __consumer_offsets-12 in /var/lib/kafka/data/__consumer_offsets-12 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,979] INFO [Partition __consumer_offsets-12 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-12 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,979] INFO [Partition __consumer_offsets-12 broker=1] Log loaded for partition __consumer_offsets-12 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,979] INFO [Broker id=1] Leader __consumer_offsets-12 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:36,989] INFO [LogLoader partition=__consumer_offsets-21, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:36,990] INFO Created log for partition __consumer_offsets-21 in /var/lib/kafka/data/__consumer_offsets-21 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:36,990] INFO [Partition __consumer_offsets-21 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-21 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,991] INFO [Partition __consumer_offsets-21 broker=1] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:36,991] INFO [Broker id=1] Leader __consumer_offsets-21 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,002] INFO [LogLoader partition=__consumer_offsets-36, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:37,004] INFO Created log for partition __consumer_offsets-36 in /var/lib/kafka/data/__consumer_offsets-36 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:37,004] INFO [Partition __consumer_offsets-36 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-36 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:37,004] INFO [Partition __consumer_offsets-36 broker=1] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:37,005] INFO [Broker id=1] Leader __consumer_offsets-36 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,046] INFO [LogLoader partition=__consumer_offsets-6, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:37,047] INFO Created log for partition __consumer_offsets-6 in /var/lib/kafka/data/__consumer_offsets-6 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:37,047] INFO [Partition __consumer_offsets-6 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-6 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:37,047] INFO [Partition __consumer_offsets-6 broker=1] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:37,047] INFO [Broker id=1] Leader __consumer_offsets-6 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,053] INFO [LogLoader partition=__consumer_offsets-43, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:37,053] INFO Created log for partition __consumer_offsets-43 in /var/lib/kafka/data/__consumer_offsets-43 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:37,053] INFO [Partition __consumer_offsets-43 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-43 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:37,053] INFO [Partition __consumer_offsets-43 broker=1] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:37,053] INFO [Broker id=1] Leader __consumer_offsets-43 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . 
Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,060] INFO [LogLoader partition=__consumer_offsets-13, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:37,060] INFO Created log for partition __consumer_offsets-13 in /var/lib/kafka/data/__consumer_offsets-13 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:37,060] INFO [Partition __consumer_offsets-13 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-13 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:37,060] INFO [Partition __consumer_offsets-13 broker=1] Log loaded for partition __consumer_offsets-13 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:37,060] INFO [Broker id=1] Leader __consumer_offsets-13 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,069] INFO [LogLoader partition=__consumer_offsets-28, dir=/var/lib/kafka/data] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) 23:16:33 kafka | [2024-03-01 23:14:37,070] INFO Created log for partition __consumer_offsets-28 in /var/lib/kafka/data/__consumer_offsets-28 with properties {cleanup.policy=compact, compression.type="producer", segment.bytes=104857600} (kafka.log.LogManager) 23:16:33 kafka | [2024-03-01 23:14:37,070] INFO [Partition __consumer_offsets-28 broker=1] No checkpointed highwatermark is found for partition __consumer_offsets-28 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:37,070] INFO [Partition __consumer_offsets-28 broker=1] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition) 23:16:33 kafka | [2024-03-01 23:14:37,070] INFO [Broker id=1] Leader __consumer_offsets-28 with topic id Some(qm-KDGJASSik3LmR-JsdxQ) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. 
(state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,077] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-3 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,077] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-18 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,077] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-41 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,077] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-10 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,077] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-33 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,077] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-48 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,077] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-19 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,077] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-34 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,077] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-4 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,077] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-11 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,077] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-26 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,077] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-49 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,077] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-39 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,077] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-9 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,077] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-24 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,077] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 
epoch 1 for the become-leader transition for partition __consumer_offsets-31 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,077] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-46 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,077] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-1 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,077] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-16 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,077] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,077] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-25 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,077] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-40 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,077] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-47 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,077] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-17 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,077] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-32 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,078] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-37 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,078] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-7 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,078] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-22 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,078] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-29 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,078] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-44 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,078] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-14 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,078] TRACE 
[Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-23 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,078] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-38 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,078] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-8 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,078] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition policy-pdp-pap-0 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,078] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-45 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,078] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-15 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,078] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-30 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,078] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-0 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,078] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-35 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,078] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-5 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,078] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-20 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,078] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-27 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,078] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-42 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,078] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-12 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,078] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-21 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,078] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-36 
(state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,078] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-6 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,078] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-43 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,078] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-13 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,078] TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 1 from controller 1 epoch 1 for the become-leader transition for partition __consumer_offsets-28 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,088] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 3 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,090] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-3 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,092] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 18 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,092] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-18 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,093] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 41 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,093] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-41 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,093] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 10 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,093] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-10 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,093] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 33 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,093] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-33 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,093] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 48 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,093] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-48 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,093] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 19 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,093] INFO [GroupMetadataManager brokerId=1] Scheduling loading of 
offsets and group metadata from __consumer_offsets-19 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,093] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 34 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,093] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-34 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,093] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 4 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,093] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-4 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,093] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 11 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,093] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-11 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,093] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 26 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,093] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-26 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,093] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 49 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,093] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-49 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,093] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 39 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,094] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-39 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,094] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 9 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,094] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-9 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,094] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 24 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,094] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-24 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,094] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 31 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,094] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from 
__consumer_offsets-31 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,094] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 46 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,094] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-46 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,094] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 1 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,094] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-1 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,094] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 16 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,094] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-16 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,095] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 2 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,095] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-2 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,095] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 25 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,095] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-25 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,095] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 40 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,095] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-40 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,095] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 47 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,095] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-47 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,095] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 17 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,095] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-17 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,095] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 32 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,095] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-32 for epoch 0 
(kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,095] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 37 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,095] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-37 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,095] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 7 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,095] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-7 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,095] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-3 in 4 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,095] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 22 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,096] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-22 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,096] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-18 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,096] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 29 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,096] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-41 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,096] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-29 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,096] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-10 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,097] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-33 in 4 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,097] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 44 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,097] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-44 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,097] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,097] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-19 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,097] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 14 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,097] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-34 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,097] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-14 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,097] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-4 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,097] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 23 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,097] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-23 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,097] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-11 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,097] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 38 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,097] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-26 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,097] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-49 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,097] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-39 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,097] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-9 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,098] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-24 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,098] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-31 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,098] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-46 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,098] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-1 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,098] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-16 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,098] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-2 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,098] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-25 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,098] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-40 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,098] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-47 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,098] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-17 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,098] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-32 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,099] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-37 in 4 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,099] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-7 in 4 milliseconds for epoch 0, of which 4 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,099] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-22 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,099] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-29 in 3 milliseconds for epoch 0, of which 3 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,099] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-44 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,099] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-14 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,099] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-23 in 2 milliseconds for epoch 0, of which 2 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,097] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-38 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 8 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,100] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-38 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-8 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 45 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-45 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,100] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-8 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 15 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-15 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 30 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,100] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-45 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-30 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 0 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,100] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-15 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-0 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 35 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-35 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,100] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 5 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-5 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,100] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-0 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 20 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-20 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,100] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 27 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,100] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-35 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,100] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-27 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,101] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-5 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 42 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-42 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 12 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,101] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-20 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-12 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 21 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,101] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-27 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-21 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 36 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,101] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-36 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 6 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,101] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-12 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-6 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 43 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,101] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-21 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-43 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,101] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 13 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-13 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,101] INFO [GroupCoordinator 1]: Elected as the group coordinator for partition 28 in epoch 0 (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,101] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-6 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. 
(kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,101] INFO [GroupMetadataManager brokerId=1] Scheduling loading of offsets and group metadata from __consumer_offsets-28 for epoch 0 (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,101] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-43 in 0 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,102] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-13 in 1 milliseconds for epoch 0, of which 0 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,102] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-28 in 1 milliseconds for epoch 0, of which 1 milliseconds was spent in the scheduler. (kafka.coordinator.group.GroupMetadataManager) 23:16:33 kafka | [2024-03-01 23:14:37,107] INFO [Broker id=1] Finished LeaderAndIsr request in 761ms correlationId 1 from controller 1 for 51 partitions (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,110] TRACE [Controller id=1 epoch=1] Received response LeaderAndIsrResponseData(errorCode=0, partitionErrors=[], topics=[LeaderAndIsrTopicError(topicId=qm-KDGJASSik3LmR-JsdxQ, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=13, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=46, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=9, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=42, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=21, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=17, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=30, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=26, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=5, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=38, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=1, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=34, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=16, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=45, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=12, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=41, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=24, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=20, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=49, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=29, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=25, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=8, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=37, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=4, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=33, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=15, errorCode=0), LeaderAndIsrPartitionError(topicName='', 
partitionIndex=48, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=11, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=44, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=23, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=19, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=32, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=28, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=7, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=40, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=3, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=36, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=47, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=14, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=43, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=10, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=22, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=18, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=31, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=27, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=39, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=6, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=35, errorCode=0), LeaderAndIsrPartitionError(topicName='', partitionIndex=2, errorCode=0)]), LeaderAndIsrTopicError(topicId=Ov6CtVzQRUKLA8r33TyiiA, partitionErrors=[LeaderAndIsrPartitionError(topicName='', partitionIndex=0, errorCode=0)])]) for request LEADER_AND_ISR with correlation id 1 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,116] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='policy-pdp-pap', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition policy-pdp-pap-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,116] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=13, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-13 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,116] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=46, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-46 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,116] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=9, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-9 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 
23:16:33 kafka | [2024-03-01 23:14:37,117] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=42, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-42 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,117] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=21, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-21 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,117] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=17, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-17 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,117] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=30, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-30 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,117] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=26, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-26 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,117] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=5, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-5 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,117] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=38, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-38 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,117] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=1, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-1 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,117] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=34, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition 
__consumer_offsets-34 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,117] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=16, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-16 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,118] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=45, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-45 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,118] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=12, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-12 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,118] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=41, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-41 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,118] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=24, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-24 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,118] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=20, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-20 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,118] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=49, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-49 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,118] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-0 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,118] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', 
partitionIndex=29, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-29 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,118] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=25, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-25 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,118] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=8, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-8 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,118] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=37, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-37 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,118] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=4, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-4 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,118] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=33, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-33 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,118] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=15, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-15 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,119] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=48, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-48 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,119] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=11, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-11 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | 
[2024-03-01 23:14:37,119] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=44, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-44 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,119] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=23, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-23 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,119] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=19, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-19 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,119] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=32, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-32 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,119] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=28, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-28 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,119] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=7, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-7 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,119] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=40, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-40 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,119] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=3, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-3 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,119] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=36, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-36 in 
response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,119] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=47, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-47 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,119] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=14, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-14 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,119] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=43, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-43 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,119] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=10, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-10 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,119] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=22, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-22 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,119] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=18, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-18 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,119] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=31, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-31 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,119] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=27, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-27 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,119] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=39, 
controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-39 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,119] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=6, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-6 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,119] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=35, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-35 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,119] TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='__consumer_offsets', partitionIndex=2, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1], zkVersion=0, replicas=[1], offlineReplicas=[]) for partition __consumer_offsets-2 in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,120] INFO [Broker id=1] Add 51 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 1 epoch 1 with correlation id 2 (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,123] TRACE [Controller id=1 epoch=1] Received response UpdateMetadataResponseData(errorCode=0) for request UPDATE_METADATA with correlation id 2 sent to broker kafka:9092 (id: 1 rack: null) (state.change.logger) 23:16:33 kafka | [2024-03-01 23:14:37,217] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group b06317e2-ac80-4179-891e-43beb77f3709 in Empty state. Created a new member id consumer-b06317e2-ac80-4179-891e-43beb77f3709-3-b170df36-31f5-40b4-8d44-edc08e3f3a00 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,217] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group policy-pap in Empty state. Created a new member id consumer-policy-pap-4-ebfd4786-b372-4cbe-8078-b32f1e613bd1 and request the member to rejoin with this id. 
(kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,229] INFO [GroupCoordinator 1]: Preparing to rebalance group b06317e2-ac80-4179-891e-43beb77f3709 in state PreparingRebalance with old generation 0 (__consumer_offsets-49) (reason: Adding new member consumer-b06317e2-ac80-4179-891e-43beb77f3709-3-b170df36-31f5-40b4-8d44-edc08e3f3a00 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,230] INFO [GroupCoordinator 1]: Preparing to rebalance group policy-pap in state PreparingRebalance with old generation 0 (__consumer_offsets-24) (reason: Adding new member consumer-policy-pap-4-ebfd4786-b372-4cbe-8078-b32f1e613bd1 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,857] INFO [GroupCoordinator 1]: Dynamic member with unknown member id joins group d5634529-e7dd-41ae-91a6-87fa8cb77024 in Empty state. Created a new member id consumer-d5634529-e7dd-41ae-91a6-87fa8cb77024-2-ce322a6e-d562-4bb5-a0de-80fe00c55a56 and request the member to rejoin with this id. (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:37,861] INFO [GroupCoordinator 1]: Preparing to rebalance group d5634529-e7dd-41ae-91a6-87fa8cb77024 in state PreparingRebalance with old generation 0 (__consumer_offsets-9) (reason: Adding new member consumer-d5634529-e7dd-41ae-91a6-87fa8cb77024-2-ce322a6e-d562-4bb5-a0de-80fe00c55a56 with group instance id None; client reason: rebalance failed due to MemberIdRequiredException) (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:40,264] INFO [GroupCoordinator 1]: Stabilized group b06317e2-ac80-4179-891e-43beb77f3709 generation 1 (__consumer_offsets-49) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:40,270] INFO [GroupCoordinator 1]: Stabilized group policy-pap generation 1 (__consumer_offsets-24) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:40,290] INFO [GroupCoordinator 1]: Assignment received from leader consumer-policy-pap-4-ebfd4786-b372-4cbe-8078-b32f1e613bd1 for group policy-pap for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:40,290] INFO [GroupCoordinator 1]: Assignment received from leader consumer-b06317e2-ac80-4179-891e-43beb77f3709-3-b170df36-31f5-40b4-8d44-edc08e3f3a00 for group b06317e2-ac80-4179-891e-43beb77f3709 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:40,863] INFO [GroupCoordinator 1]: Stabilized group d5634529-e7dd-41ae-91a6-87fa8cb77024 generation 1 (__consumer_offsets-9) with 1 members (kafka.coordinator.group.GroupCoordinator) 23:16:33 kafka | [2024-03-01 23:14:40,881] INFO [GroupCoordinator 1]: Assignment received from leader consumer-d5634529-e7dd-41ae-91a6-87fa8cb77024-2-ce322a6e-d562-4bb5-a0de-80fe00c55a56 for group d5634529-e7dd-41ae-91a6-87fa8cb77024 for generation 1. The group has 1 members, 0 of which are static. (kafka.coordinator.group.GroupCoordinator) 23:16:33 ++ echo 'Tearing down containers...' 23:16:33 Tearing down containers... 23:16:33 ++ docker-compose down -v --remove-orphans 23:16:34 Stopping policy-apex-pdp ... 23:16:34 Stopping policy-pap ... 
23:16:34 Stopping policy-api ... 23:16:34 Stopping kafka ... 23:16:34 Stopping grafana ... 23:16:34 Stopping compose_zookeeper_1 ... 23:16:34 Stopping mariadb ... 23:16:34 Stopping prometheus ... 23:16:34 Stopping simulator ... 23:16:35 Stopping grafana ... done 23:16:35 Stopping prometheus ... done 23:16:44 Stopping policy-apex-pdp ... done 23:16:55 Stopping simulator ... done 23:16:55 Stopping policy-pap ... done 23:16:56 Stopping mariadb ... done 23:16:56 Stopping kafka ... done 23:16:56 Stopping compose_zookeeper_1 ... done 23:17:05 Stopping policy-api ... done 23:17:05 Removing policy-apex-pdp ... 23:17:05 Removing policy-pap ... 23:17:05 Removing policy-api ... 23:17:05 Removing policy-db-migrator ... 23:17:05 Removing kafka ... 23:17:05 Removing grafana ... 23:17:05 Removing compose_zookeeper_1 ... 23:17:05 Removing mariadb ... 23:17:05 Removing prometheus ... 23:17:05 Removing simulator ... 23:17:05 Removing policy-db-migrator ... done 23:17:05 Removing policy-api ... done 23:17:05 Removing policy-pap ... done 23:17:05 Removing compose_zookeeper_1 ... done 23:17:05 Removing simulator ... done 23:17:05 Removing grafana ... done 23:17:05 Removing prometheus ... done 23:17:05 Removing policy-apex-pdp ... done 23:17:05 Removing mariadb ... done 23:17:05 Removing kafka ... done 23:17:05 Removing network compose_default 23:17:05 ++ cd /w/workspace/policy-pap-master-project-csit-pap 23:17:05 + load_set 23:17:05 + _setopts=hxB 23:17:05 ++ echo braceexpand:hashall:interactive-comments:xtrace 23:17:05 ++ tr : ' ' 23:17:05 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:05 + set +o braceexpand 23:17:05 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:05 + set +o hashall 23:17:05 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:05 + set +o interactive-comments 23:17:05 + for i in $(echo "${SHELLOPTS}" | tr ':' ' ') 23:17:05 + set +o xtrace 23:17:05 ++ echo hxB 23:17:05 ++ sed 's/./& /g' 23:17:05 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:17:05 + set +h 23:17:05 + for i in $(echo "$_setopts" | sed 's/./& /g') 23:17:05 + set +x 23:17:05 + [[ -n /tmp/tmp.h1E7S28y7V ]] 23:17:05 + rsync -av /tmp/tmp.h1E7S28y7V/ /w/workspace/policy-pap-master-project-csit-pap/csit/archives/pap 23:17:05 sending incremental file list 23:17:05 ./ 23:17:05 log.html 23:17:05 output.xml 23:17:05 report.html 23:17:05 testplan.txt 23:17:05 23:17:05 sent 919,560 bytes received 95 bytes 1,839,310.00 bytes/sec 23:17:05 total size is 919,014 speedup is 1.00 23:17:05 + rm -rf /w/workspace/policy-pap-master-project-csit-pap/models 23:17:06 + exit 0 23:17:06 $ ssh-agent -k 23:17:06 unset SSH_AUTH_SOCK; 23:17:06 unset SSH_AGENT_PID; 23:17:06 echo Agent pid 2084 killed; 23:17:06 [ssh-agent] Stopped. 23:17:06 Robot results publisher started... 23:17:06 INFO: Checking test criticality is deprecated and will be dropped in a future release! 23:17:06 -Parsing output xml: 23:17:06 Done! 23:17:06 WARNING! Could not find file: **/log.html 23:17:06 WARNING! Could not find file: **/report.html 23:17:06 -Copying log files to build dir: 23:17:06 Done! 23:17:06 -Assigning results to build: 23:17:06 Done! 23:17:06 -Checking thresholds: 23:17:06 Done! 23:17:06 Done publishing Robot results. 23:17:06 [PostBuildScript] - [INFO] Executing post build scripts. 
23:17:06 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins162756540799372044.sh 23:17:06 ---> sysstat.sh 23:17:07 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins15151093833306479894.sh 23:17:07 ---> package-listing.sh 23:17:07 ++ facter osfamily 23:17:07 ++ tr '[:upper:]' '[:lower:]' 23:17:07 + OS_FAMILY=debian 23:17:07 + workspace=/w/workspace/policy-pap-master-project-csit-pap 23:17:07 + START_PACKAGES=/tmp/packages_start.txt 23:17:07 + END_PACKAGES=/tmp/packages_end.txt 23:17:07 + DIFF_PACKAGES=/tmp/packages_diff.txt 23:17:07 + PACKAGES=/tmp/packages_start.txt 23:17:07 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 23:17:07 + PACKAGES=/tmp/packages_end.txt 23:17:07 + case "${OS_FAMILY}" in 23:17:07 + dpkg -l 23:17:07 + grep '^ii' 23:17:07 + '[' -f /tmp/packages_start.txt ']' 23:17:07 + '[' -f /tmp/packages_end.txt ']' 23:17:07 + diff /tmp/packages_start.txt /tmp/packages_end.txt 23:17:07 + '[' /w/workspace/policy-pap-master-project-csit-pap ']' 23:17:07 + mkdir -p /w/workspace/policy-pap-master-project-csit-pap/archives/ 23:17:07 + cp -f /tmp/packages_diff.txt /tmp/packages_end.txt /tmp/packages_start.txt /w/workspace/policy-pap-master-project-csit-pap/archives/ 23:17:07 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins15089204340955608805.sh 23:17:07 ---> capture-instance-metadata.sh 23:17:07 Setup pyenv: 23:17:07 system 23:17:07 3.8.13 23:17:07 3.9.13 23:17:07 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:17:07 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-6x4h from file:/tmp/.os_lf_venv 23:17:09 lf-activate-venv(): INFO: Installing: lftools 23:17:19 lf-activate-venv(): INFO: Adding /tmp/venv-6x4h/bin to PATH 23:17:19 INFO: Running in OpenStack, capturing instance metadata 23:17:20 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins9158816102445977896.sh 23:17:20 provisioning config files... 23:17:20 copy managed file [jenkins-log-archives-settings] to file:/w/workspace/policy-pap-master-project-csit-pap@tmp/config9518412275156764119tmp 23:17:20 Regular expression run condition: Expression=[^.*logs-s3.*], Label=[] 23:17:20 Run condition [Regular expression match] preventing perform for step [Provide Configuration files] 23:17:20 [EnvInject] - Injecting environment variables from a build step. 23:17:20 [EnvInject] - Injecting as environment variables the properties content 23:17:20 SERVER_ID=logs 23:17:20 23:17:20 [EnvInject] - Variables injected successfully. 23:17:20 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins1313798731519703148.sh 23:17:20 ---> create-netrc.sh 23:17:20 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins911902395853409526.sh 23:17:20 ---> python-tools-install.sh 23:17:20 Setup pyenv: 23:17:20 system 23:17:20 3.8.13 23:17:20 3.9.13 23:17:20 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:17:20 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-6x4h from file:/tmp/.os_lf_venv 23:17:21 lf-activate-venv(): INFO: Installing: lftools 23:17:29 lf-activate-venv(): INFO: Adding /tmp/venv-6x4h/bin to PATH 23:17:29 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins4346386011406522355.sh 23:17:29 ---> sudo-logs.sh 23:17:29 Archiving 'sudo' log.. 
23:17:30 [policy-pap-master-project-csit-pap] $ /bin/bash /tmp/jenkins7203077978918115047.sh 23:17:30 ---> job-cost.sh 23:17:30 Setup pyenv: 23:17:30 system 23:17:30 3.8.13 23:17:30 3.9.13 23:17:30 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:17:30 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-6x4h from file:/tmp/.os_lf_venv 23:17:31 lf-activate-venv(): INFO: Installing: zipp==1.1.0 python-openstackclient urllib3~=1.26.15 23:17:37 lf-activate-venv(): INFO: Adding /tmp/venv-6x4h/bin to PATH 23:17:37 INFO: No Stack... 23:17:37 INFO: Retrieving Pricing Info for: v3-standard-8 23:17:38 INFO: Archiving Costs 23:17:38 [policy-pap-master-project-csit-pap] $ /bin/bash -l /tmp/jenkins14436961549963754081.sh 23:17:38 ---> logs-deploy.sh 23:17:38 Setup pyenv: 23:17:38 system 23:17:38 3.8.13 23:17:38 3.9.13 23:17:38 * 3.10.6 (set by /w/workspace/policy-pap-master-project-csit-pap/.python-version) 23:17:38 lf-activate-venv(): INFO: Reuse venv:/tmp/venv-6x4h from file:/tmp/.os_lf_venv 23:17:39 lf-activate-venv(): INFO: Installing: lftools 23:17:47 lf-activate-venv(): INFO: Adding /tmp/venv-6x4h/bin to PATH 23:17:47 INFO: Nexus URL https://nexus.onap.org path production/vex-yul-ecomp-jenkins-1/policy-pap-master-project-csit-pap/1596 23:17:47 INFO: archiving workspace using pattern(s): -p **/target/surefire-reports/*-output.txt 23:17:48 Archives upload complete. 23:17:48 INFO: archiving logs to Nexus 23:17:49 ---> uname -a: 23:17:49 Linux prd-ubuntu1804-docker-8c-8g-10134 4.15.0-192-generic #203-Ubuntu SMP Wed Aug 10 17:40:03 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux 23:17:49 23:17:49 23:17:49 ---> lscpu: 23:17:49 Architecture: x86_64 23:17:49 CPU op-mode(s): 32-bit, 64-bit 23:17:49 Byte Order: Little Endian 23:17:49 CPU(s): 8 23:17:49 On-line CPU(s) list: 0-7 23:17:49 Thread(s) per core: 1 23:17:49 Core(s) per socket: 1 23:17:49 Socket(s): 8 23:17:49 NUMA node(s): 1 23:17:49 Vendor ID: AuthenticAMD 23:17:49 CPU family: 23 23:17:49 Model: 49 23:17:49 Model name: AMD EPYC-Rome Processor 23:17:49 Stepping: 0 23:17:49 CPU MHz: 2799.998 23:17:49 BogoMIPS: 5599.99 23:17:49 Virtualization: AMD-V 23:17:49 Hypervisor vendor: KVM 23:17:49 Virtualization type: full 23:17:49 L1d cache: 32K 23:17:49 L1i cache: 32K 23:17:49 L2 cache: 512K 23:17:49 L3 cache: 16384K 23:17:49 NUMA node0 CPU(s): 0-7 23:17:49 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl xtopology cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr arat npt nrip_save umip rdpid arch_capabilities 23:17:49 23:17:49 23:17:49 ---> nproc: 23:17:49 8 23:17:49 23:17:49 23:17:49 ---> df -h: 23:17:49 Filesystem Size Used Avail Use% Mounted on 23:17:49 udev 16G 0 16G 0% /dev 23:17:49 tmpfs 3.2G 708K 3.2G 1% /run 23:17:49 /dev/vda1 155G 14G 142G 9% / 23:17:49 tmpfs 16G 0 16G 0% /dev/shm 23:17:49 tmpfs 5.0M 0 5.0M 0% /run/lock 23:17:49 tmpfs 16G 0 16G 0% /sys/fs/cgroup 23:17:49 /dev/vda15 105M 4.4M 100M 5% /boot/efi 23:17:49 tmpfs 3.2G 0 3.2G 0% /run/user/1001 23:17:49 23:17:49 23:17:49 ---> free -m: 23:17:49 total used free shared buff/cache available 23:17:49 Mem: 
32167 825 25134 0 6207 30886 23:17:49 Swap: 1023 0 1023 23:17:49 23:17:49 23:17:49 ---> ip addr: 23:17:49 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 23:17:49 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 23:17:49 inet 127.0.0.1/8 scope host lo 23:17:49 valid_lft forever preferred_lft forever 23:17:49 inet6 ::1/128 scope host 23:17:49 valid_lft forever preferred_lft forever 23:17:49 2: ens3: mtu 1458 qdisc mq state UP group default qlen 1000 23:17:49 link/ether fa:16:3e:2b:6a:d7 brd ff:ff:ff:ff:ff:ff 23:17:49 inet 10.30.107.168/23 brd 10.30.107.255 scope global dynamic ens3 23:17:49 valid_lft 85951sec preferred_lft 85951sec 23:17:49 inet6 fe80::f816:3eff:fe2b:6ad7/64 scope link 23:17:49 valid_lft forever preferred_lft forever 23:17:49 3: docker0: mtu 1500 qdisc noqueue state DOWN group default 23:17:49 link/ether 02:42:45:40:d3:d8 brd ff:ff:ff:ff:ff:ff 23:17:49 inet 10.250.0.254/24 brd 10.250.0.255 scope global docker0 23:17:49 valid_lft forever preferred_lft forever 23:17:49 23:17:49 23:17:49 ---> sar -b -r -n DEV: 23:17:49 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-10134) 03/01/24 _x86_64_ (8 CPU) 23:17:49 23:17:49 23:10:23 LINUX RESTART (8 CPU) 23:17:49 23:17:49 23:11:02 tps rtps wtps bread/s bwrtn/s 23:17:49 23:12:01 99.92 18.13 81.78 1041.32 22452.33 23:17:49 23:13:01 135.54 23.71 111.83 2791.27 28956.37 23:17:49 23:14:01 396.35 8.40 387.95 514.81 147087.42 23:17:49 23:15:01 152.17 6.67 145.51 291.55 21799.33 23:17:49 23:16:01 13.73 0.00 13.73 0.00 15010.23 23:17:49 23:17:01 54.09 0.08 54.01 9.20 16722.61 23:17:49 Average: 142.08 9.47 132.61 773.95 42059.17 23:17:49 23:17:49 23:11:02 kbmemfree kbavail kbmemused %memused kbbuffers kbcached kbcommit %commit kbactive kbinact kbdirty 23:17:49 23:12:01 30126408 31735872 2812812 8.54 71188 1848544 1427276 4.20 839020 1684016 166736 23:17:49 23:13:01 28142676 31689124 4796544 14.56 110148 3677084 1573324 4.63 974088 3413824 1620428 23:17:49 23:14:01 25337248 31249916 7601972 23.08 149832 5880512 5335400 15.70 1500852 5527916 364 23:17:49 23:15:01 23576224 29617112 9362996 28.43 156532 5991944 8828272 25.97 3254332 5505344 328 23:17:49 23:16:01 23574076 29615452 9365144 28.43 156676 5992172 8769468 25.80 3258760 5503116 208 23:17:49 23:17:01 25079136 31138748 7860084 23.86 157644 6019304 2378444 7.00 1806064 5499104 264 23:17:49 Average: 25972628 30841037 6966592 21.15 133670 4901593 4718697 13.88 1938853 4522220 298055 23:17:49 23:17:49 23:11:02 IFACE rxpck/s txpck/s rxkB/s txkB/s rxcmp/s txcmp/s rxmcst/s %ifutil 23:17:49 23:12:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:17:49 23:12:01 lo 1.69 1.69 0.18 0.18 0.00 0.00 0.00 0.00 23:17:49 23:12:01 ens3 66.11 44.50 1017.64 8.11 0.00 0.00 0.00 0.00 23:17:49 23:13:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:17:49 23:13:01 lo 7.80 7.80 0.72 0.72 0.00 0.00 0.00 0.00 23:17:49 23:13:01 ens3 336.16 209.65 10554.63 24.20 0.00 0.00 0.00 0.00 23:17:49 23:13:01 br-046ca749ae0d 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:17:49 23:14:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:17:49 23:14:01 vethd5df93a 0.00 0.03 0.00 0.00 0.00 0.00 0.00 0.00 23:17:49 23:14:01 lo 5.73 5.73 0.58 0.58 0.00 0.00 0.00 0.00 23:17:49 23:14:01 veth964046b 0.00 0.15 0.00 0.01 0.00 0.00 0.00 0.00 23:17:49 23:15:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:17:49 23:15:01 vethd5df93a 14.73 13.40 1.96 1.93 0.00 0.00 0.00 0.00 23:17:49 23:15:01 lo 3.05 3.05 2.46 2.46 0.00 0.00 0.00 0.00 23:17:49 23:15:01 veth964046b 0.00 0.28 0.00 0.01 0.00 0.00 0.00 0.00 
23:17:49 23:16:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:17:49 23:16:01 vethd5df93a 13.93 9.38 1.06 1.34 0.00 0.00 0.00 0.00 23:17:49 23:16:01 lo 6.03 6.03 1.37 1.37 0.00 0.00 0.00 0.00 23:17:49 23:16:01 veth964046b 0.00 0.02 0.00 0.00 0.00 0.00 0.00 0.00 23:17:49 23:17:01 docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:17:49 23:17:01 lo 6.70 6.70 0.54 0.54 0.00 0.00 0.00 0.00 23:17:49 23:17:01 ens3 1533.71 910.00 33907.41 139.24 0.00 0.00 0.00 0.00 23:17:49 23:17:01 veth1c605a3 54.06 48.39 20.46 40.51 0.00 0.00 0.00 0.00 23:17:49 Average: docker0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 23:17:49 Average: lo 5.18 5.18 0.98 0.98 0.00 0.00 0.00 0.00 23:17:49 Average: ens3 208.09 117.37 5538.06 13.93 0.00 0.00 0.00 0.00 23:17:49 Average: veth1c605a3 9.03 8.09 3.42 6.77 0.00 0.00 0.00 0.00 23:17:49 23:17:49 23:17:49 ---> sar -P ALL: 23:17:49 Linux 4.15.0-192-generic (prd-ubuntu1804-docker-8c-8g-10134) 03/01/24 _x86_64_ (8 CPU) 23:17:49 23:17:49 23:10:23 LINUX RESTART (8 CPU) 23:17:49 23:17:49 23:11:02 CPU %user %nice %system %iowait %steal %idle 23:17:49 23:12:01 all 9.94 0.00 0.70 2.06 0.05 87.25 23:17:49 23:12:01 0 19.63 0.00 1.03 1.44 0.03 77.86 23:17:49 23:12:01 1 13.36 0.00 0.97 0.34 0.10 85.23 23:17:49 23:12:01 2 22.45 0.00 1.59 4.33 0.03 71.60 23:17:49 23:12:01 3 4.36 0.00 0.29 2.94 0.07 92.34 23:17:49 23:12:01 4 4.66 0.00 0.49 0.46 0.07 94.32 23:17:49 23:12:01 5 2.95 0.00 0.25 0.22 0.02 96.56 23:17:49 23:12:01 6 6.95 0.00 0.39 4.42 0.03 88.21 23:17:49 23:12:01 7 5.14 0.00 0.54 2.39 0.05 91.88 23:17:49 23:13:01 all 12.10 0.00 2.51 2.20 0.05 83.13 23:17:49 23:13:01 0 5.56 0.00 2.58 4.04 0.02 87.81 23:17:49 23:13:01 1 10.88 0.00 1.89 0.25 0.07 86.92 23:17:49 23:13:01 2 23.40 0.00 3.35 0.84 0.05 72.35 23:17:49 23:13:01 3 7.36 0.00 2.16 6.94 0.03 83.51 23:17:49 23:13:01 4 24.86 0.00 3.77 1.66 0.05 69.65 23:17:49 23:13:01 5 13.69 0.00 2.66 0.32 0.03 83.30 23:17:49 23:13:01 6 4.16 0.00 1.61 3.38 0.02 90.84 23:17:49 23:13:01 7 7.00 0.00 2.06 0.17 0.07 90.70 23:17:49 23:14:01 all 11.28 0.00 4.56 8.60 0.06 75.49 23:17:49 23:14:01 0 9.47 0.00 6.50 42.15 0.08 41.80 23:17:49 23:14:01 1 10.70 0.00 3.61 0.56 0.07 85.07 23:17:49 23:14:01 2 11.70 0.00 4.44 1.98 0.07 81.82 23:17:49 23:14:01 3 13.07 0.00 4.71 1.95 0.05 80.22 23:17:49 23:14:01 4 12.59 0.00 4.82 3.40 0.07 79.12 23:17:49 23:14:01 5 11.45 0.00 4.32 1.00 0.05 83.18 23:17:49 23:14:01 6 9.79 0.00 3.39 8.16 0.05 78.61 23:17:49 23:14:01 7 11.48 0.00 4.77 9.65 0.07 74.04 23:17:49 23:15:01 all 25.35 0.00 2.82 1.69 0.08 70.07 23:17:49 23:15:01 0 26.66 0.00 2.58 3.32 0.08 67.35 23:17:49 23:15:01 1 22.04 0.00 2.67 3.72 0.10 71.46 23:17:49 23:15:01 2 20.78 0.00 2.44 0.57 0.08 76.12 23:17:49 23:15:01 3 28.04 0.00 3.20 0.64 0.07 68.06 23:17:49 23:15:01 4 27.08 0.00 2.88 0.27 0.07 69.71 23:17:49 23:15:01 5 23.38 0.00 2.65 1.11 0.07 72.80 23:17:49 23:15:01 6 24.51 0.00 2.55 1.52 0.08 71.34 23:17:49 23:15:01 7 30.26 0.00 3.63 2.36 0.08 63.67 23:17:49 23:16:01 all 3.80 0.00 0.36 0.84 0.07 94.93 23:17:49 23:16:01 0 4.40 0.00 0.30 0.08 0.07 95.16 23:17:49 23:16:01 1 2.74 0.00 0.22 6.44 0.10 90.50 23:17:49 23:16:01 2 5.24 0.00 0.45 0.00 0.05 94.26 23:17:49 23:16:01 3 3.45 0.00 0.23 0.05 0.07 96.20 23:17:49 23:16:01 4 4.12 0.00 0.38 0.00 0.05 95.45 23:17:49 23:16:01 5 2.92 0.00 0.43 0.00 0.07 96.58 23:17:49 23:16:01 6 4.52 0.00 0.45 0.17 0.05 94.81 23:17:49 23:16:01 7 3.04 0.00 0.37 0.02 0.05 96.53 23:17:49 23:17:01 all 1.40 0.00 0.48 1.10 0.06 96.95 23:17:49 23:17:01 0 1.43 0.00 0.40 0.57 0.03 97.57 23:17:49 23:17:01 1 1.35 0.00 0.47 7.24 0.08 
90.86 23:17:49 23:17:01 2 1.00 0.00 0.42 0.10 0.08 98.40 23:17:49 23:17:01 3 0.72 0.00 0.52 0.42 0.05 98.30 23:17:49 23:17:01 4 1.22 0.00 0.57 0.10 0.05 98.06 23:17:49 23:17:01 5 2.43 0.00 0.52 0.07 0.07 96.92 23:17:49 23:17:01 6 1.67 0.00 0.50 0.03 0.05 97.75 23:17:49 23:17:01 7 1.39 0.00 0.47 0.32 0.07 97.76 23:17:49 Average: all 10.63 0.00 1.90 2.74 0.06 84.67 23:17:49 Average: 0 11.15 0.00 2.22 8.51 0.05 78.07 23:17:49 Average: 1 10.16 0.00 1.63 3.11 0.09 85.02 23:17:49 Average: 2 14.06 0.00 2.11 1.29 0.06 82.48 23:17:49 Average: 3 9.49 0.00 1.85 2.15 0.06 86.46 23:17:49 Average: 4 12.42 0.00 2.15 0.98 0.06 84.39 23:17:49 Average: 5 9.47 0.00 1.80 0.45 0.05 88.23 23:17:49 Average: 6 8.59 0.00 1.48 2.93 0.05 86.95 23:17:49 Average: 7 9.72 0.00 1.97 2.47 0.06 85.78 23:17:49 23:17:49 23:17:49